Diffusion-based generative models have demonstrated a capacity for perceptually impressive synthesis, but can they also be great likelihood-based models? We answer this in the affirmative, and introduce a family of diffusion-based generative models that obtain state-of-the-art likelihoods on standard image density estimation benchmarks. Unlike other diffusion-based models, our method allows for efficient optimization of the noise schedule jointly with the rest of the model. We show that the variational lower bound (VLB) simplifies to a remarkably short expression in terms of the signal-to-noise ratio of the diffused data, thereby improving our theoretical understanding of this model class. Using this insight, we prove an equivalence between several models proposed in the literature. In addition, we show that the continuous-time VLB is invariant to the noise schedule, except for the signal-to-noise ratio at its endpoints. This enables us to learn a noise schedule that minimizes the variance of the resulting VLB estimator, leading to faster optimization. Combining these advances with architectural improvements, we obtain state-of-the-art likelihoods on image density estimation benchmarks, outperforming autoregressive models that have dominated these benchmarks for many years, with often significantly faster optimization. In addition, we show how to turn the model into a bits-back compression scheme, and demonstrate lossless compression rates close to the theoretical optimum.
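As an illustrative sketch of the kind of simplification the abstract refers to (the notation below is assumed here rather than defined in the abstract): writing the diffused data as $\mathbf{z}_t = \alpha_t \mathbf{x} + \sigma_t \boldsymbol{\epsilon}$ with signal-to-noise ratio $\mathrm{SNR}(t) = \alpha_t^2/\sigma_t^2$, a continuous-time diffusion loss of this general shape can be rewritten as an integral over signal-to-noise levels,
\begin{equation*}
\mathcal{L}_\infty(\mathbf{x}) \;=\; \tfrac{1}{2}\,\mathbb{E}_{\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})} \int_{\mathrm{SNR}_{\min}}^{\mathrm{SNR}_{\max}} \big\lVert \mathbf{x} - \tilde{\mathbf{x}}_\theta(\mathbf{z}_v, v) \big\rVert_2^2 \, dv,
\end{equation*}
where $\tilde{\mathbf{x}}_\theta(\mathbf{z}_v, v)$ denotes the denoising model reparameterized in terms of the noise level $v$. Under this sketch, the bound depends on the noise schedule only through its endpoint SNR values, while the schedule in between affects only the variance of the Monte Carlo estimator over time, which is what makes variance-minimizing schedule learning possible.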