Research in vision-language models has seen rapid development of late, enabling natural-language interfaces for image generation and manipulation. Many existing text-guided manipulation techniques are restricted to specific classes of images and often require fine-tuning to transfer to a different style or domain. Yet generic image manipulation using a single model with flexible text inputs is highly desirable. Recent work addresses this task by guiding generative models trained on generic image datasets with pretrained vision-language encoders. While promising, this approach requires expensive optimization for each input. In this work, we propose an optimization-free method for generic image manipulation from text prompts. Our approach exploits recent Latent Diffusion Models (LDMs) for text-to-image generation to achieve zero-shot text-guided manipulation. We employ a deterministic forward diffusion in a lower-dimensional latent space, and the desired manipulation is achieved by simply providing the target text to condition the reverse diffusion process. We refer to our approach as LDEdit. We demonstrate the applicability of our method to semantic image manipulation and artistic style transfer. Our method can accomplish image manipulation on diverse domains and enables editing of multiple attributes in a straightforward fashion. Extensive experiments demonstrate the benefit of our approach over competing baselines.
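To make the procedure concrete, the following is a minimal sketch of the deterministic forward/reverse diffusion loop described above. It is written against hypothetical callables (`vae_encode`, `vae_decode`, `eps_model`) and a hypothetical cumulative noise schedule `alphas`; these names, and the choice of an unconditional forward pass, are illustrative assumptions rather than the paper's actual interface.

```python
# Sketch of an LDEdit-style editing loop (assumptions noted above).
import torch

@torch.no_grad()
def ldedit(image, target_text, vae_encode, vae_decode, eps_model, alphas, steps):
    """Edit `image` toward `target_text` via deterministic (DDIM, eta=0) latent diffusion.

    `steps` is an increasing sequence of timesteps; `alphas[t]` is the cumulative
    noise-schedule coefficient at timestep t (hypothetical interface).
    """
    z = vae_encode(image)  # project the image into the LDM's lower-dimensional latent space

    # Deterministic forward diffusion: invert the latent toward z_T.
    # (Conditioning of this pass is not specified in the abstract; unconditional here.)
    for t, t_next in zip(steps[:-1], steps[1:]):
        eps = eps_model(z, t, text=None)
        z0 = (z - (1 - alphas[t]).sqrt() * eps) / alphas[t].sqrt()
        z = alphas[t_next].sqrt() * z0 + (1 - alphas[t_next]).sqrt() * eps

    # Reverse diffusion conditioned on the target prompt: the edit happens here.
    for t, t_prev in zip(reversed(steps[1:]), reversed(steps[:-1])):
        eps = eps_model(z, t, text=target_text)
        z0 = (z - (1 - alphas[t]).sqrt() * eps) / alphas[t].sqrt()
        z = alphas[t_prev].sqrt() * z0 + (1 - alphas[t_prev]).sqrt() * eps

    return vae_decode(z)  # decode the edited latent back to pixel space
```

The point of the sketch is that no per-image optimization is involved: the edit is obtained purely by swapping the conditioning text in the reverse pass.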