Recent progress with conditional image diffusion models has been stunning, whether the models are conditioned on a text description, a scene layout, or a sketch. Unconditional image diffusion models are also improving but lag behind, as do diffusion models conditioned on lower-dimensional features like class labels. We propose to close the gap between conditional and unconditional models using a two-stage sampling procedure. In the first stage we sample an embedding describing the semantic content of the image. In the second stage we sample the image conditioned on this embedding and then discard the embedding. Doing so lets us leverage the power of conditional diffusion models on the unconditional generation task, which we show improves FID by 25-50% compared to standard unconditional generation.
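The two-stage procedure can be summarized in a short sketch. The class names, dimensions, and `sample()` signatures below are hypothetical stand-ins chosen for illustration, not the paper's actual implementation; a real system would run reverse diffusion in both stages.

```python
# Minimal sketch of the two-stage unconditional sampling procedure.
# EmbeddingDiffusion and ImageDiffusion are hypothetical stand-ins.

import torch
import torch.nn as nn


class EmbeddingDiffusion(nn.Module):
    """Stand-in prior over semantic embeddings (stage 1)."""

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.embed_dim = embed_dim

    @torch.no_grad()
    def sample(self, batch_size: int) -> torch.Tensor:
        # A real model would run a reverse diffusion process over embeddings;
        # here we draw from a standard Gaussian purely for illustration.
        return torch.randn(batch_size, self.embed_dim)


class ImageDiffusion(nn.Module):
    """Stand-in conditional image diffusion model (stage 2)."""

    def __init__(self, image_size: int = 64, embed_dim: int = 512):
        super().__init__()
        self.image_size = image_size
        self.proj = nn.Linear(embed_dim, 3 * image_size * image_size)

    @torch.no_grad()
    def sample(self, embedding: torch.Tensor) -> torch.Tensor:
        # A real model would denoise from pure noise, conditioning each step
        # on `embedding`; here we just map the embedding to an image shape.
        b = embedding.shape[0]
        return self.proj(embedding).view(b, 3, self.image_size, self.image_size)


@torch.no_grad()
def sample_unconditional(embed_model: EmbeddingDiffusion,
                         image_model: ImageDiffusion,
                         batch_size: int) -> torch.Tensor:
    # Stage 1: sample an embedding describing the semantic content of the image.
    z = embed_model.sample(batch_size)
    # Stage 2: sample an image conditioned on that embedding.
    images = image_model.sample(z)
    # The embedding is discarded, so the overall procedure is unconditional.
    return images


if __name__ == "__main__":
    images = sample_unconditional(EmbeddingDiffusion(), ImageDiffusion(), batch_size=4)
    print(images.shape)  # torch.Size([4, 3, 64, 64])
```

The point of the sketch is the control flow: the embedding acts as an internal conditioning signal that is sampled and then thrown away, so no external labels or text are needed at generation time.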