Diffusion-based generative models have demonstrated a capacity for perceptually impressive synthesis, but can they also be great likelihood-based models? We answer this in the affirmative, and introduce a family of diffusion-based generative models that obtain state-of-the-art likelihoods on standard image density estimation benchmarks. Unlike other diffusion-based models, our method allows for efficient optimization of the noise schedule jointly with the rest of the model. We show that the variational lower bound (VLB) simplifies to a remarkably short expression in terms of the signal-to-noise ratio of the diffused data, thereby improving our theoretical understanding of this model class. Using this insight, we prove an equivalence between several models proposed in the literature. In addition, we show that the continuous-time VLB is invariant to the noise schedule, except for the signal-to-noise ratio at its endpoints. This enables us to learn a noise schedule that minimizes the variance of the resulting VLB estimator, leading to faster optimization. Combining these advances with architectural improvements, we obtain state-of-the-art likelihoods on image density estimation benchmarks, outperforming autoregressive models that have dominated these benchmarks for many years, with often significantly faster optimization. In addition, we show how to use the model as part of a bits-back compression scheme, and demonstrate lossless compression rates close to the theoretical optimum.
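As a rough sketch of the simplification referred to above (the notation here is illustrative and assumed, not quoted from the paper's body): writing $\mathrm{SNR}(t)$ for the signal-to-noise ratio of the diffused data $\mathbf{z}_t$ and $\hat{\mathbf{x}}_\theta$ for the denoising model, the continuous-time VLB reduces, up to prior and reconstruction terms, to a weighted mean-squared-error of the form
$$
\mathcal{L}_\infty(\mathbf{x}) \;=\; -\tfrac{1}{2}\,\mathbb{E}_{\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})}\int_0^1 \mathrm{SNR}'(t)\,\big\|\mathbf{x}-\hat{\mathbf{x}}_\theta(\mathbf{z}_t;t)\big\\|_2^2\,\mathrm{d}t ,
$$
which, after the change of variables $v=\mathrm{SNR}(t)$, depends on the noise schedule only through its endpoint values $\mathrm{SNR}(0)$ and $\mathrm{SNR}(1)$.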