Text-conditioned image editing has recently attracted considerable interest. However, most current methods are either limited to specific editing types (e.g., object overlay, style transfer), apply only to synthetically generated images, or require multiple input images of a common object. In this paper we demonstrate, for the very first time, the ability to apply complex (e.g., non-rigid) text-guided semantic edits to a single real image. For example, we can change the posture and composition of one or multiple objects inside an image, while preserving its original characteristics. Our method can make a standing dog sit down or jump, cause a bird to spread its wings, etc. -- each within the single high-resolution natural image provided by the user. In contrast to previous work, our proposed method requires only a single input image and a target text (the desired edit). It operates on real images and does not require any additional inputs (such as image masks or additional views of the object). Our method, which we call "Imagic", leverages a pre-trained text-to-image diffusion model for this task. It produces a text embedding that aligns with both the input image and the target text, while fine-tuning the diffusion model to capture the image-specific appearance. We demonstrate the quality and versatility of our method on numerous inputs from various domains, showcasing a wide range of high-quality, complex semantic image edits, all within a single unified framework.
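The closing sentences describe the core mechanism: optimize a text embedding to match the input image, then fine-tune the generative model around it. The toy numpy sketch below mirrors that structure with a linear "generator" standing in for the diffusion model; every name, dimension, and the closed-form fine-tuning step are illustrative assumptions, not the paper's implementation. The interpolation stage at the end comes from the full method rather than this abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained text-to-image generator: a fixed linear
# map from a 4-d "text embedding" to an 8-d "image". The real method uses
# a diffusion model; all names and dimensions here are illustrative only.
W = rng.normal(size=(8, 4))

def generate(emb, weights):
    """Render an 'image' from a text embedding with the given weights."""
    return weights @ emb

e_tgt = rng.normal(size=4)   # embedding of the target text (hypothetical)
x_in = rng.normal(size=8)    # the single user-provided input "image"

# Stage 1: freeze the model and optimize the text embedding so that it
# reconstructs the input image, starting from the target-text embedding.
e_opt = e_tgt.copy()
for _ in range(500):
    grad = 2.0 * W.T @ (generate(e_opt, W) - x_in)  # grad of ||W e - x||^2
    e_opt -= 0.01 * grad

# Stage 2: fine-tune the model so it maps e_opt exactly to the input image.
# A closed-form rank-1 correction here; the paper uses gradient fine-tuning.
residual = x_in - generate(e_opt, W)
W_ft = W + np.outer(residual, e_opt) / (e_opt @ e_opt)

# Stage 3 (from the full paper, not stated in this abstract): generate the
# edit from an interpolation between the optimized and target embeddings.
eta = 0.7
e_edit = (1.0 - eta) * e_opt + eta * e_tgt
edited = generate(e_edit, W_ft)

# At eta = 0 the fine-tuned model reproduces the input image exactly.
print(np.allclose(generate(e_opt, W_ft), x_in))  # True
```

At eta = 0 the output is the input image; increasing eta moves the result toward the target text while the fine-tuned weights preserve the image-specific appearance.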