Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples with respect to their fine-scale information and structures, enabling diffusion models to generate higher-quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement.
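The guidance step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `predict_eps` and `self_attention_mask` are hypothetical stand-ins for a real denoiser and its attention maps, and a separable box blur substitutes for the Gaussian blur used in practice. The key idea is the extrapolation `eps + s * (eps - eps_degraded)`, which steers the prediction away from a sample whose attended regions have been blurred.

```python
import numpy as np

# Hypothetical stand-ins for a real diffusion model (assumptions, not the
# actual SAG implementation).
def predict_eps(x):
    """Placeholder noise prediction of a denoiser at the current timestep."""
    return 0.1 * x

def self_attention_mask(x):
    """Placeholder binary mask of regions the model 'attends to'.

    In SAG this comes from the model's intermediate self-attention maps;
    here we simply mark high-magnitude regions for illustration.
    """
    return (np.abs(x) > np.abs(x).mean()).astype(x.dtype)

def box_blur(x, k=5):
    """Separable box blur as a simple stand-in for a Gaussian blur."""
    kernel = np.ones(k) / k
    x = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, x)
    x = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, x)
    return x

def sag_step(x, guidance_scale=1.0):
    """One SAG-style guided noise prediction (sketch).

    Blurs only the attended regions of the intermediate sample, then
    extrapolates the prediction away from the degraded sample's prediction.
    """
    mask = self_attention_mask(x)
    x_degraded = mask * box_blur(x) + (1 - mask) * x  # blur attended regions only
    eps = predict_eps(x)
    eps_degraded = predict_eps(x_degraded)
    return eps + guidance_scale * (eps - eps_degraded)
```

With `guidance_scale=0` the step reduces to the unguided prediction; larger scales push the sample further from its self-blurred counterpart, analogous to how classifier-free guidance extrapolates away from the unconditional prediction.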