Conditional diffusion probabilistic models can model the distribution of natural images and generate diverse, realistic samples based on given conditions. However, their results are often unrealistic, exhibiting noticeable color shifts and texture artifacts. We believe this issue arises from the divergence between the probabilistic distribution learned by the model and the distribution of natural images. The subtle conditioning information gradually enlarges this divergence at each sampling timestep. To address this issue, we introduce a new method that pulls the predicted samples back toward the training-data manifold using a pretrained unconditional diffusion model. The unconditional model acts as a regularizer and reduces the divergence introduced by the conditional model at each sampling step. We perform comprehensive experiments demonstrating the effectiveness of our approach on super-resolution, colorization, turbulence removal, and image deraining tasks. The improvements obtained by our method suggest that such priors can be incorporated as a general plugin for improving conditional diffusion models.
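As a rough illustration of the idea described above: at each reverse-diffusion step, the conditional model's noise estimate can be combined with that of a pretrained unconditional model before the standard DDPM update, so the unconditional prior keeps the sample near the training-data manifold. The sketch below is a minimal, assumed PyTorch formulation; the names `cond_model`, `uncond_model`, and the fixed blend weight `lam` are hypothetical, and the simple linear interpolation stands in for whatever regularization rule the paper derives.

```python
import torch

@torch.no_grad()
def regularized_ddpm_step(x_t, t, y, cond_model, uncond_model,
                          alpha, alpha_bar, sigma, lam=0.2):
    """One reverse-diffusion step regularized by an unconditional prior.

    Hypothetical sketch: `cond_model(x_t, t, y)` predicts noise given the
    condition y; `uncond_model(x_t, t)` is the pretrained unconditional
    prior. `alpha`, `alpha_bar`, `sigma` are the usual DDPM schedules
    indexed by the integer timestep t.
    """
    eps_cond = cond_model(x_t, t, y)    # condition-guided noise estimate
    eps_prior = uncond_model(x_t, t)    # unconditional prior estimate
    # Blend the two estimates; the prior counteracts the divergence the
    # conditional model introduces at this step.
    eps = (1.0 - lam) * eps_cond + lam * eps_prior

    # Standard DDPM posterior mean computed from the blended estimate.
    mean = (x_t - (1 - alpha[t]) / torch.sqrt(1 - alpha_bar[t]) * eps) \
           / torch.sqrt(alpha[t])
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + sigma[t] * noise
```

In practice the blend weight need not be constant; a schedule over timesteps (or a projection-style update instead of linear interpolation) would be a natural variant, but the fixed `lam` here keeps the sketch minimal.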