Diffusion probabilistic models have been shown to generate state-of-the-art results on several competitive image synthesis benchmarks, but they lack a low-dimensional, interpretable latent space and are slow at generation. On the other hand, standard Variational Autoencoders (VAEs) typically have access to a low-dimensional latent space but exhibit poor sample quality. We present DiffuseVAE, a novel generative framework that integrates a VAE within a diffusion model framework, and leverage this integration to design novel conditional parameterizations for diffusion models. We show that the resulting model equips diffusion models with a low-dimensional, VAE-inferred latent code that can be used for downstream tasks like controllable synthesis. The proposed method also improves upon the speed-vs-quality tradeoff exhibited by standard unconditional DDPM/DDIM models (for instance, an FID of 16.47 vs. 34.36 for a standard DDIM on the CelebA-HQ-128 benchmark with T=10 reverse process steps) without being explicitly trained for such an objective. Furthermore, the proposed model exhibits synthesis quality comparable to state-of-the-art models on standard image synthesis benchmarks like CIFAR-10 and CelebA-64 while outperforming most existing VAE-based methods. Lastly, we show that the proposed method exhibits inherent generalization to different types of noise in the conditioning signal. For reproducibility, our source code is publicly available at https://github.com/kpandey008/DiffuseVAE.
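To make the two-stage idea described above concrete, the sketch below illustrates one plausible generation pipeline: a VAE decoder maps a low-dimensional latent code to a coarse sample, and a DDPM reverse process, conditioned on that coarse sample, refines it. This is a minimal conceptual sketch, not the authors' implementation: the network architectures, latent size, noise schedule, and concatenation-based conditioning are placeholder assumptions.

```python
# Conceptual two-stage sampler (assumed toy setup, not the DiffuseVAE codebase).
import torch
import torch.nn as nn

IMG = 3 * 32 * 32     # flattened image size (placeholder)
LATENT = 128          # low-dimensional VAE latent size (placeholder)
T = 10                # number of reverse-process steps

# Stage 1: VAE decoder mapping a latent code z to a coarse reconstruction x_hat.
vae_decoder = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(), nn.Linear(512, IMG))

# Stage 2: denoiser eps_theta(x_t, x_hat, t); conditioning here is simple
# concatenation of the noisy sample, the VAE output, and a scalar time embedding.
denoiser = nn.Sequential(nn.Linear(2 * IMG + 1, 512), nn.ReLU(), nn.Linear(512, IMG))

# Linear beta schedule and the derived DDPM quantities.
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample(batch: int = 4) -> torch.Tensor:
    # Stage 1: draw a low-dimensional latent and decode a coarse sample.
    z = torch.randn(batch, LATENT)
    x_hat = vae_decoder(z)

    # Stage 2: DDPM ancestral sampling conditioned on x_hat.
    x = torch.randn(batch, IMG)
    for t in reversed(range(T)):
        t_emb = torch.full((batch, 1), t / T)
        eps = denoiser(torch.cat([x, x_hat, t_emb], dim=1))
        mean = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise
    return x  # refined samples; z provides a low-dimensional handle on them

print(sample().shape)  # torch.Size([4, 3072])
```

Because every refined sample is tied to the latent code z drawn in stage 1, manipulating z gives the kind of low-dimensional control over the diffusion output that the abstract refers to.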