Recent works on diffusion models have demonstrated a strong capability for conditional image generation, e.g., text-guided image synthesis. Such success has inspired many efforts to use large-scale pre-trained diffusion models to tackle a challenging problem: real image editing. Works in this area learn a unique textual token corresponding to several images containing the same object. However, under many circumstances, only one image is available, such as the painting of the Girl with a Pearl Earring. Using existing works to fine-tune pre-trained diffusion models with a single image causes severe overfitting issues. The information leakage from the pre-trained diffusion models prevents editing from keeping the same content as the given image while creating new features depicted by the language guidance. This work aims to address the problem of single-image editing. We propose a novel model-based guidance built upon classifier-free guidance so that the knowledge from the model trained on a single image can be distilled into the pre-trained diffusion model, enabling content creation even with one given image. Additionally, we propose a patch-based fine-tuning strategy that effectively helps the model generate images of arbitrary resolution. We provide extensive experiments to validate the design choices of our approach and show promising editing capabilities, including style change, content addition, and object manipulation. The code is available for research purposes at https://github.com/zhang-zx/SINE.git .
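The model-based guidance built upon classifier-free guidance can be pictured with a minimal sketch. The function name `guided_noise`, the weights `w` and `v`, and the particular linear combination below are illustrative assumptions, not the paper's exact formulation: the idea is that the noise prediction of the single-image fine-tuned model is blended with the pre-trained model's text-conditioned prediction, each expressed as an offset from the unconditional prediction, generalizing the standard classifier-free guidance term.

```python
def guided_noise(eps_uncond: float, eps_text: float, eps_single: float,
                 w: float, v: float) -> float:
    """Hypothetical sketch of model-based guidance at one denoising step.

    eps_uncond: unconditional noise prediction of the pre-trained model
    eps_text:   text-conditioned prediction of the pre-trained model
    eps_single: prediction of the model fine-tuned on the single image
    w, v:       guidance weights for the text and single-image terms

    Setting v = 0 recovers plain classifier-free guidance.
    """
    return (eps_uncond
            + w * (eps_text - eps_uncond)      # classifier-free guidance term
            + v * (eps_single - eps_uncond))   # distilled single-image term
```

In practice these predictions would be tensors produced by two diffusion U-Nets at each sampling step; scalars are used here only to keep the combination readable.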