Diffusion models have emerged as an expressive family of generative models rivaling GANs in sample quality and autoregressive models in likelihood scores. Standard diffusion models typically require hundreds of forward passes through the model to generate a single high-fidelity sample. We introduce Differentiable Diffusion Sampler Search (DDSS): a method that optimizes fast samplers for any pre-trained diffusion model by differentiating through sample quality scores. We also present Generalized Gaussian Diffusion Models (GGDM), a family of flexible non-Markovian samplers for diffusion models. We show that optimizing the degrees of freedom of GGDM samplers by maximizing sample quality scores via gradient descent leads to improved sample quality. Our optimization procedure backpropagates through the sampling process using the reparametrization trick and gradient rematerialization. DDSS achieves strong results on unconditional image generation across various datasets (e.g., FID scores on LSUN church 128x128 of 11.6 with only 10 inference steps, and 4.82 with 20 steps, compared to 51.1 and 14.9 with the strongest DDPM/DDIM baselines). Our method is compatible with any pre-trained diffusion model without requiring fine-tuning or re-training.
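The core mechanism described above, backpropagating a differentiable sample-quality loss through the full sampling chain into learnable sampler coefficients, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `TinyDenoiser` stands in for a frozen pre-trained diffusion model, the per-step parameters `log_sigmas` and `mix` are hypothetical stand-ins for the GGDM degrees of freedom, and the squared-norm loss replaces a real perceptual sample-quality score. Gaussian transitions are reparametrized as mean plus scale times fixed noise so gradients flow through them, and `torch.utils.checkpoint` provides the gradient rematerialization that keeps memory usage manageable over many steps.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Hypothetical stand-in for a frozen, pre-trained denoising network.
class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(2, 2)

    def forward(self, x):
        return self.net(x)

torch.manual_seed(0)
model = TinyDenoiser()
for p in model.parameters():
    p.requires_grad_(False)  # the pre-trained model is never fine-tuned

T = 10  # number of inference steps in the fast sampler
# Learnable per-step coefficients: toy stand-ins for the sampler's
# degrees of freedom that DDSS would optimize.
log_sigmas = torch.zeros(T, requires_grad=True)
mix = torch.full((T,), 0.5, requires_grad=True)

def step(x, eps_noise, log_sigma, m):
    # One reparametrized Gaussian transition:
    # x_prev = m * x + (1 - m) * denoiser(x) + sigma * noise,
    # written so gradients flow into m and log_sigma.
    eps_theta = model(x)
    return m * x + (1.0 - m) * eps_theta + torch.exp(log_sigma) * eps_noise

def sample(x_T, noises):
    x = x_T
    for t in reversed(range(T)):
        # Rematerialize this step's activations during the backward pass
        # instead of storing them for all T steps.
        x = checkpoint(step, x, noises[t], log_sigmas[t], mix[t],
                       use_reentrant=False)
    return x

x_T = torch.randn(8, 2)           # batch of terminal noise samples
noises = torch.randn(T, 8, 2)     # fixed per-step noise (reparametrization)
x_0 = sample(x_T, noises)

# Toy differentiable "sample quality" loss; DDSS instead maximizes a
# perceptual sample-quality score.
loss = (x_0 ** 2).mean()
loss.backward()
print(log_sigmas.grad is not None, mix.grad is not None)
```

After `backward()`, both coefficient tensors carry gradients, so a standard optimizer step would adjust the sampler while leaving the pre-trained model untouched.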