Inversion-based visual editing provides an effective, training-free way to edit an image or video according to user instructions. Existing methods typically inject source-image information during the sampling process to maintain editing consistency. However, this sampling strategy relies too heavily on the source information, which degrades the edits in the target image (e.g., failing to change a subject's attributes such as pose, number, or color as instructed). In this work, we propose ProEdit to address this issue in both the attention and the latent aspects. On the attention side, we introduce KV-mix, which mixes the key/value (KV) features of the source and the target within the edited region, mitigating the source image's influence on that region while preserving background consistency. On the latent side, we propose Latents-Shift, which perturbs the edited region of the source latent, removing the influence of the inverted latent on sampling. Extensive experiments on several image and video editing benchmarks demonstrate that our method achieves state-of-the-art performance. Moreover, our design is plug-and-play and can be seamlessly integrated into existing inversion and editing methods such as RF-Solver, FireFlow, and UniEdit.
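To make the two components concrete, the following is a minimal PyTorch sketch of how a KV mixing step and a latent perturbation step could be realized. The function names, the mixing weight `alpha`, the `noise_scale` parameter, and the tensor layouts are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import torch

def kv_mix(k_src, v_src, k_tgt, v_tgt, edit_mask, alpha=0.5):
    """Sketch of a KV-mix-style step (assumed form).

    Inside the edited region (edit_mask == 1), blend source and target
    key/value features to weaken the source image's influence; outside
    it, keep the pure source KV to preserve the background.

    k_*, v_*: (batch, tokens, dim) attention keys/values
    edit_mask: (batch, tokens, 1) binary mask of the edited region
    alpha: hypothetical mixing weight for the target features
    """
    k = edit_mask * (alpha * k_tgt + (1 - alpha) * k_src) + (1 - edit_mask) * k_src
    v = edit_mask * (alpha * v_tgt + (1 - alpha) * v_src) + (1 - edit_mask) * v_src
    return k, v

def latents_shift(z_src, edit_mask, noise_scale=0.1):
    """Sketch of a Latents-Shift-style step (assumed form).

    Perturb only the edited region of the inverted source latent with
    Gaussian noise, so sampling in that region is less anchored to the
    source; the background latent is left untouched.
    """
    noise = torch.randn_like(z_src) * noise_scale
    return z_src + edit_mask * noise
```

The key design point in both sketches is that the mask restricts every intervention to the edited region, which is how the method can loosen the source constraint where the edit happens while leaving background reconstruction unaffected.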