Diffusion models have shown remarkable capabilities in generating high-quality and creative images conditioned on text. An interesting application of such models is structure-preserving, text-guided image editing. Existing approaches rely on text-conditioned diffusion models such as Stable Diffusion or Imagen and require compute-intensive optimization of text embeddings or fine-tuning of the model weights for text-guided image editing. We explore text-guided image editing with a Hybrid Diffusion Model (HDM) architecture similar to DALLE-2. Our architecture consists of a diffusion prior model that generates a CLIP image embedding conditioned on a text prompt and a custom Latent Diffusion Model trained to generate images conditioned on the CLIP image embedding. We discover that the diffusion prior model can be used to perform text-guided conceptual edits in the CLIP image embedding space without any fine-tuning or optimization. We combine this with structure-preserving edits on the image decoder using existing approaches such as reverse DDIM to perform text-guided image editing. Our approach, PRedItOR, requires no additional inputs, fine-tuning, optimization, or objectives, and shows on-par or better results than baselines both qualitatively and quantitatively. We provide further analysis and understanding of the diffusion prior model and believe this opens up new possibilities in diffusion model research.
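The following is a minimal sketch of the editing pipeline described above: a conceptual edit performed by the diffusion prior in CLIP image-embedding space, followed by a structure-preserving decode via reverse DDIM inversion. All component and method names (DiffusionPrior, LatentDecoder, ddim_invert, sample) are hypothetical placeholders chosen for illustration, not the authors' actual API or hyperparameters.

```python
def preditor_edit(prior, decoder, clip, source_image, edit_prompt,
                  prior_steps=64, invert_strength=0.6):
    """Sketch of a PRedItOR-style edit: conceptual edit via the diffusion prior,
    structure preservation via reverse DDIM on the decoder (assumed interfaces)."""
    # 1. Embed the source image with a frozen CLIP image encoder.
    src_emb = clip.encode_image(source_image)

    # 2. Diffusion prior: generate a text-conditioned CLIP *image* embedding.
    #    Initializing from the source embedding keeps the edit conceptually
    #    close to the original image (a "conceptual edit" in embedding space).
    txt_emb = clip.encode_text(edit_prompt)
    edited_emb = prior.sample(text_cond=txt_emb,
                              init_embedding=src_emb,
                              num_steps=prior_steps)

    # 3. Structure-preserving decode: invert the source image with reverse DDIM
    #    to recover intermediate latents, then denoise those latents conditioned
    #    on the edited CLIP image embedding.
    latents = decoder.ddim_invert(source_image, strength=invert_strength)
    edited_image = decoder.sample(init_latents=latents,
                                  image_embed_cond=edited_emb)
    return edited_image
```

Because both stages reuse frozen, pretrained components, no per-image optimization or weight fine-tuning is needed, which is the key practical difference from embedding-optimization or fine-tuning baselines.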