Image generation has recently seen tremendous advances, with diffusion models making it possible to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method that takes advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require a mask to be provided, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is a mechanism that automatically generates a mask highlighting the regions of the input image that need to be edited, by contrasting the predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-generated images.
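To make the mask-generation idea concrete, the sketch below illustrates how one might contrast the noise predictions of a text-conditioned diffusion model under two prompts and threshold their disagreement into an editing mask. It is a minimal illustration, not the authors' implementation: the denoiser `eps_model(x_t, t, prompt)`, its signature, the simplified noising schedule, and all hyperparameters are assumptions made for the example.

```python
import torch

def estimate_edit_mask(eps_model, x0, prompt_ref, prompt_query,
                       n_samples=10, t=0.5, threshold=0.5):
    """Sketch: contrast noise predictions under two prompts to locate edit regions.

    Regions where the two conditionings disagree most are the ones the edit
    should change; the rest of the image can be preserved.
    eps_model is a hypothetical noise-prediction network taking (x_t, t, prompt).
    """
    diffs = []
    for _ in range(n_samples):
        noise = torch.randn_like(x0)
        # Noise the input image to an intermediate timestep
        # (linear interpolation here is a simplification of a real noise schedule).
        x_t = (1 - t) * x0 + t * noise
        eps_ref = eps_model(x_t, t, prompt_ref)      # e.g. a caption of the input image
        eps_query = eps_model(x_t, t, prompt_query)  # e.g. the edit query
        # Per-pixel disagreement between the two conditionings.
        diffs.append((eps_query - eps_ref).abs().mean(dim=1, keepdim=True))
    diff = torch.stack(diffs).mean(dim=0)
    # Normalize to [0, 1] and binarize to obtain the editing mask.
    diff = (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)
    return (diff > threshold).float()
```

Averaging over several noise samples is one plausible way to stabilize the estimate, since a single noised version of the input can make the per-pixel disagreement quite noisy.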