Diffusion-based generative models have demonstrated a capacity for perceptually impressive synthesis, but can they also be great likelihood-based models? We answer this in the affirmative, and introduce a family of diffusion-based generative models that obtain state-of-the-art likelihoods on standard image density estimation benchmarks. Unlike other diffusion-based models, our method allows for efficient optimization of the noise schedule jointly with the rest of the model. We show that the variational lower bound (VLB) simplifies to a remarkably short expression in terms of the signal-to-noise ratio of the diffused data, thereby improving our theoretical understanding of this model class. Using this insight, we prove an equivalence between several models proposed in the literature. In addition, we show that the continuous-time VLB is invariant to the noise schedule, except for the signal-to-noise ratio at its endpoints. This enables us to learn a noise schedule that minimizes the variance of the resulting VLB estimator, leading to faster optimization. Combining these advances with architectural improvements, we obtain state-of-the-art likelihoods on image density estimation benchmarks, outperforming autoregressive models that have dominated these benchmarks for many years, with often significantly faster optimization. In addition, we show how to use the model as part of a bits-back compression scheme, and demonstrate lossless compression rates close to the theoretical optimum. Code is available at https://github.com/google-research/vdm .
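As a brief sketch of the two VLB claims above (using notation not defined in this abstract: $\mathbf{z}_v$ denotes the diffused data at signal-to-noise ratio $v$, $\tilde{\mathbf{x}}_{\boldsymbol{\theta}}$ the denoising model, and $\mathrm{SNR}_{\min}$, $\mathrm{SNR}_{\max}$ the signal-to-noise ratios at the schedule's endpoints), the continuous-time diffusion loss can be written, after a change of variables from time $t$ to $v = \mathrm{SNR}(t)$, roughly as
\[
  \mathcal{L}_\infty(\mathbf{x}) \;=\; \tfrac{1}{2}\, \mathbb{E}_{\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0},\mathbf{I})} \int_{\mathrm{SNR}_{\min}}^{\mathrm{SNR}_{\max}} \big\lVert \mathbf{x} - \tilde{\mathbf{x}}_{\boldsymbol{\theta}}(\mathbf{z}_v, v) \big\rVert_2^2 \, \mathrm{d}v ,
\]
in which the noise schedule enters only through the integration limits; this is the invariance referred to above.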
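For concreteness, here is a minimal JAX sketch of a single-sample Monte Carlo estimate of such a continuous-time loss, assuming a variance-preserving forward process and a noise-prediction parameterization. The names `diffusion_loss`, `denoise_fn`, and `linear_log_snr` are hypothetical and do not refer to the released code at the URL above; a learnable schedule would replace `linear_log_snr` and could then be tuned to reduce the variance of this estimator, as described above.

```python
import jax
import jax.numpy as jnp


def linear_log_snr(t, log_snr_max=10.0, log_snr_min=-10.0):
    """Hypothetical example schedule: log SNR(t) decreases linearly in t."""
    return log_snr_max + t * (log_snr_min - log_snr_max)


def diffusion_loss(params, x, key, denoise_fn, log_snr_fn=linear_log_snr):
    """Single-sample Monte Carlo estimate of the continuous-time diffusion loss,
    expressed through the signal-to-noise ratio of the diffused data.

    Assumes a variance-preserving process z_t = alpha_t * x + sigma_t * eps with
    alpha_t^2 = sigmoid(log SNR(t)) and sigma_t^2 = sigmoid(-log SNR(t)).
    """
    t_key, eps_key = jax.random.split(key)
    t = jax.random.uniform(t_key, ())          # t ~ Uniform(0, 1)
    eps = jax.random.normal(eps_key, x.shape)  # eps ~ N(0, I)

    # log SNR(t) and its time derivative; the derivative supplies the loss weight.
    log_snr, dlog_snr_dt = jax.value_and_grad(log_snr_fn)(t)
    alpha = jnp.sqrt(jax.nn.sigmoid(log_snr))
    sigma = jnp.sqrt(jax.nn.sigmoid(-log_snr))
    z_t = alpha * x + sigma * eps              # diffused data at time t

    eps_hat = denoise_fn(params, z_t, log_snr)  # predicted noise
    # Noise-prediction form of -1/2 * SNR'(t) * ||x - x_hat||^2: switching from
    # x-prediction to eps-prediction turns the weight SNR'(t) into d/dt log SNR(t).
    return -0.5 * dlog_snr_dt * jnp.sum((eps - eps_hat) ** 2)


# Usage with a dummy, untrained noise-prediction model:
if __name__ == "__main__":
    dummy_denoise = lambda params, z, log_snr: params * z
    x = jnp.ones((32, 32, 3))
    print(diffusion_loss(0.1, x, jax.random.PRNGKey(0), dummy_denoise))
```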