Diffusion models have enabled high-quality, conditional image editing capabilities. We propose to expand their arsenal and demonstrate that off-the-shelf diffusion models can be used for a wide range of cross-domain compositing tasks. These include, among numerous others, image blending, object immersion, texture replacement, and even CG2Real translation or stylization. We employ a localized, iterative refinement scheme that infuses the injected objects with contextual information derived from the background scene and enables control over the degree and types of changes the object may undergo. We conduct a range of qualitative and quantitative comparisons to prior work and show that our method produces higher-quality, more realistic results without requiring any annotations or training. Finally, we demonstrate how our method may be used for data augmentation in downstream tasks.
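The localized, iterative refinement described above can be pictured as mask-guided blending inside the reverse diffusion loop: the masked region is re-synthesized from a partially noised composite while the background is re-imposed at every step, so scene context flows into the injected object without altering its surroundings. The following is a minimal PyTorch-flavored sketch under stated assumptions, not the paper's implementation: `denoise_step`, `add_noise`, and the `strength` parameter are hypothetical placeholders standing in for a real pretrained diffusion model, its noise scheduler, and the paper's controls over how much the object may change.

```python
import torch

def denoise_step(x_t, t):
    """Hypothetical stand-in for one reverse-diffusion step of a pretrained model."""
    return x_t - 0.01 * torch.randn_like(x_t)

def add_noise(x0, t, num_steps):
    """Forward-diffuse a clean image to (roughly) the noise level of timestep t."""
    alpha = t / num_steps
    return (1 - alpha) * x0 + alpha * torch.randn_like(x0)

def localized_refinement(background, composite, mask, num_steps=50, strength=0.6):
    """Sketch of mask-guided iterative refinement.

    Inside the mask, the injected object is re-synthesized from a partially
    noised composite; outside the mask, the background is restored at every
    step, keeping edits localized. `strength` (an assumed knob, not the
    paper's API) sets how much noise is added and thus how much the object
    may deviate from its original appearance.
    """
    start = int(num_steps * strength)           # higher strength => larger changes
    x = add_noise(composite, start, num_steps)  # start from a partially noised composite
    for t in range(start, 0, -1):
        x = denoise_step(x, t)                           # refine the whole image
        bg_t = add_noise(background, t - 1, num_steps)   # background at the matching noise level
        x = mask * x + (1 - mask) * bg_t                 # restrict changes to the masked region
    return x

# Usage sketch: tensors of shape (1, 3, H, W); mask is 1 inside the edited region.
# result = localized_refinement(background_img, composite_img, object_mask)
```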