Diffusion models have made significant breakthroughs in image, audio, and video generation, but they depend on an iterative generation process that causes slow sampling speed and caps their potential for real-time applications. To overcome this limitation, we propose consistency models, a new family of generative models that achieve high sample quality without adversarial training. They support fast one-step generation by design, while still allowing for few-step sampling to trade compute for sample quality. They also support zero-shot data editing, like image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either as a way to distill pre-trained diffusion models, or as standalone generative models. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step generation. For example, we achieve the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained as standalone generative models, consistency models also outperform single-step, non-adversarial generative models on standard benchmarks like CIFAR-10, ImageNet 64x64 and LSUN 256x256.
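To make the one-step versus few-step trade-off concrete, below is a minimal sketch (not the authors' released code) of how a trained consistency model could be used for sampling. The function argument `consistency_fn`, the noise-level grid `times`, and the endpoints `t_max`/`t_min` are illustrative assumptions; few-step sampling alternates denoising with re-noising at decreasing noise levels, spending extra compute for higher sample quality.

```python
# Hypothetical sketch of consistency-model sampling; consistency_fn(x, t) is assumed
# to map a noisy input at noise level t directly to a clean sample.
import torch

def one_step_sample(consistency_fn, shape, t_max=80.0):
    # Draw pure noise at the maximum noise level and map it to data in a single call.
    x = torch.randn(shape) * t_max
    return consistency_fn(x, t_max)

def multistep_sample(consistency_fn, shape, times=(80.0, 20.0, 5.0), t_min=0.002):
    # Denoise, then perturb back to a smaller noise level and denoise again,
    # trading additional model evaluations for sample quality.
    x = torch.randn(shape) * times[0]
    x0 = consistency_fn(x, times[0])
    for t in times[1:]:
        noise = torch.randn(shape)
        x = x0 + (t**2 - t_min**2) ** 0.5 * noise  # re-noise to level t
        x0 = consistency_fn(x, t)
    return x0
```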