We introduce a new image editing and synthesis framework, Stochastic Differential Editing (SDEdit), based on a recent generative model using stochastic differential equations (SDEs). Given an input image with user edits (e.g., hand-drawn color strokes), we first add noise to the input according to an SDE, and subsequently denoise it by simulating the reverse SDE to gradually increase its likelihood under the prior. Our method does not require task-specific loss function designs, which are critical components for recent image editing methods based on GAN inversion. Compared to conditional GANs, we do not need to collect new datasets of original and edited images for new applications. Therefore, our method can quickly adapt to various editing tasks at test time without re-training models. Our approach achieves strong performance on a wide range of applications, including image synthesis and editing guided by stroke paintings and image compositing.
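The perturb-then-denoise procedure described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a VE-SDE with a geometric noise schedule and substitutes a toy analytic score function (`toy_score`, for a standard-normal data prior) where a real system would use a learned score network. The function name `sdedit` and all parameters are illustrative.

```python
import numpy as np

def sdedit(guide, score_fn, t0=0.5, n_steps=100,
           sigma_min=0.01, sigma_max=10.0, rng=None):
    """Perturb `guide` to noise level sigma(t0), then simulate the
    reverse VE-SDE back to t = 0 (hypothetical minimal sketch)."""
    rng = np.random.default_rng(rng)
    # Geometric schedule restricted to [0, t0]: larger t0 favors realism
    # under the prior, smaller t0 favors faithfulness to the user's guide.
    ts = np.linspace(t0, 0.0, n_steps + 1)
    sigmas = sigma_min * (sigma_max / sigma_min) ** ts
    # Forward perturbation: a single shot of Gaussian noise at level sigma(t0).
    x = guide + sigmas[0] * rng.standard_normal(np.shape(guide))
    # Reverse SDE via Euler--Maruyama discretization.
    for i in range(n_steps):
        diff = sigmas[i] ** 2 - sigmas[i + 1] ** 2
        x = x + diff * score_fn(x, sigmas[i])          # drift toward high likelihood
        x = x + np.sqrt(diff) * rng.standard_normal(np.shape(x))  # diffusion term
    return x

# Toy analytic score for data ~ N(0, 1): grad log p_sigma(x) = -x / (1 + sigma^2).
def toy_score(x, sigma):
    return -x / (1.0 + sigma ** 2)

guide = np.full(1000, 5.0)            # "edited" inputs, far from the prior mode
out = sdedit(guide, toy_score, t0=0.7, rng=0)
print(out.mean())                     # pulled toward the prior mean of 0
```

With the true score of the noised prior, the reverse simulation trades off between the guide and the prior: the output stays correlated with the input while its likelihood under the prior increases, which is the core mechanism SDEdit exploits for editing without task-specific losses or retraining.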