Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples $10 \times$ to $50 \times$ faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.
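The accelerated sampling described above can be sketched as a short loop: because the non-Markovian formulation admits a deterministic reverse update, sampling can visit only a subsequence of the training timesteps. The following is a minimal NumPy sketch under that assumption; `eps_model` is a hypothetical stand-in for a trained noise-prediction network (here it returns zeros just so the loop runs end to end), not the paper's actual model.

```python
import numpy as np

def eps_model(x, t):
    # Hypothetical placeholder for a trained network that predicts
    # the noise component of x at timestep t.
    return np.zeros_like(x)

def ddim_sample(x_T, alpha_bar, steps):
    """Deterministic DDIM sampling (no fresh noise injected per step).

    alpha_bar: cumulative products of (1 - beta_t) over the training schedule.
    steps: decreasing list of timesteps -- a subsequence of the full
    schedule, which is what makes sampling 10x-50x faster than DDPM.
    """
    x = x_T
    for i, t in enumerate(steps):
        t_prev = steps[i + 1] if i + 1 < len(steps) else -1
        a_t = alpha_bar[t]
        a_prev = alpha_bar[t_prev] if t_prev >= 0 else 1.0
        eps = eps_model(x, t)
        # Closed-form prediction of the clean sample x_0 from x_t.
        x0_pred = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        # Deterministic update: move toward x0_pred along the predicted
        # noise direction, with sigma_t = 0 (the "implicit" case).
        x = np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps
    return x

# Usage: a 1000-step linear beta schedule, sampled with only 10 DDIM steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)
steps = list(range(T - 1, -1, -T // 10))  # 10 evenly spaced timesteps
x = ddim_sample(np.random.randn(4, 4), alpha_bar, steps)
```

Because the update is deterministic given `x_T`, the initial noise acts as a latent code, which is what enables the semantically meaningful latent-space interpolation mentioned above.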