Recent studies have demonstrated that the forward diffusion process is crucial to the effectiveness of diffusion models in terms of generative quality and sampling efficiency. We propose incorporating an analytical image attenuation process into the forward diffusion process for high-quality (un)conditioned image generation with significantly fewer denoising steps than the vanilla diffusion model, which requires thousands of steps. In a nutshell, our method represents the forward image-to-noise mapping as simultaneous \textit{image-to-zero} and \textit{zero-to-noise} mappings. Under this framework, we mathematically derive 1) the training objectives and 2) the reverse-time sampling formula, based on an analytical attenuation function that models the image-to-zero mapping. The former enables our method to learn the noise and image components simultaneously, which simplifies learning. Importantly, because the \textit{zero-to-image} sampling function is analytical, we can dispense with ordinary differential equation-based accelerators and naturally perform sampling with an arbitrary step size. We have conducted extensive experiments on unconditioned image generation, \textit{e.g.}, on CIFAR-10 and CelebA-HQ-256, and on image-conditioned downstream tasks such as super-resolution, saliency detection, edge detection, and image inpainting. The proposed diffusion models achieve competitive generative quality with far fewer denoising steps than the state of the art, thus greatly accelerating generation. In particular, to generate images of comparable quality, our models require only one-twentieth of the denoising steps needed by the baseline denoising diffusion probabilistic models. Moreover, we achieve state-of-the-art performance on the image-conditioned tasks using no more than 10 steps.
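To make the decomposition concrete, the following is a minimal sketch in standard diffusion notation; the symbols $\alpha(t)$ and $\beta(t)$ are illustrative placeholders, not the paper's own definitions:
\begin{equation*}
x_t = \alpha(t)\, x_0 + \beta(t)\, \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I),
\end{equation*}
where the analytical attenuation function $\alpha(t)$ decays from $\alpha(0) = 1$ to $\alpha(T) = 0$ (the \textit{image-to-zero} mapping) and $\beta(t)$ grows from $0$ to $1$ (the \textit{zero-to-noise} mapping). Under this reading, because $\alpha(t)$ is available in closed form at every $t$, the reverse-time sampling formula can be evaluated directly at arbitrary step sizes, which is what removes the need for ODE-based accelerators.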