Denoising diffusion probabilistic models (DDPMs) (Ho et al. 2020) have shown impressive results on image and waveform generation in continuous state spaces. Here, we introduce Discrete Denoising Diffusion Probabilistic Models (D3PMs), diffusion-like generative models for discrete data that generalize the multinomial diffusion model of Hoogeboom et al. 2021, by going beyond corruption processes with uniform transition probabilities. This includes corruption with transition matrices that mimic Gaussian kernels in continuous space, matrices based on nearest neighbors in embedding space, and matrices that introduce absorbing states. The third allows us to draw a connection between diffusion models and autoregressive and mask-based generative models. We show that the choice of transition matrix is an important design decision that leads to improved results in image and text domains. We also introduce a new loss function that combines the variational lower bound with an auxiliary cross entropy loss. For text, this model class achieves strong results on character-level text generation while scaling to large vocabularies on LM1B. On the image dataset CIFAR-10, our models approach the sample quality and exceed the log-likelihood of the continuous-space DDPM model.
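To make the notion of structured corruption concrete, below is a minimal NumPy sketch of the three kinds of row-stochastic transition matrices the abstract refers to: uniform, discretized-Gaussian-like, and absorbing-state. The function names, the simple row normalization used for the Gaussian-like matrix, and the one-step sampling snippet are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

def uniform_transition_matrix(K, beta):
    """Uniform corruption: with probability beta, jump to a uniformly random state."""
    return (1 - beta) * np.eye(K) + beta * np.ones((K, K)) / K

def absorbing_transition_matrix(K, beta, mask_id):
    """Absorbing-state corruption: with probability beta, move to a [MASK]-like
    state and stay there forever (the absorbing state)."""
    Q = (1 - beta) * np.eye(K)
    Q[:, mask_id] += beta
    Q[mask_id, :] = 0.0
    Q[mask_id, mask_id] = 1.0
    return Q

def gaussian_like_transition_matrix(K, sigma):
    """Gaussian-kernel-like corruption for ordinal data: transition probability
    decays with |i - j|. Row-normalized here for simplicity (an assumption;
    not necessarily the paper's exact normalization)."""
    idx = np.arange(K)
    logits = -((idx[None, :] - idx[:, None]) ** 2) / (2.0 * sigma ** 2)
    Q = np.exp(logits)
    return Q / Q.sum(axis=1, keepdims=True)

# One forward corruption step: q(x_t | x_{t-1}) = Cat(x_t; p = x_{t-1} Q_t).
K, beta = 8, 0.1
Q = absorbing_transition_matrix(K + 1, beta, mask_id=K)  # extra [MASK] token
x_prev = np.eye(K + 1)[3]          # one-hot encoding of the current state
probs = x_prev @ Q                 # categorical distribution over the next state
x_next = np.random.choice(K + 1, p=probs)
```

Each matrix has rows summing to one, so applying it to a one-hot state yields a valid categorical distribution for the next corruption step; the absorbing variant is what links D3PMs to mask-based generative models.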