We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows the instruction to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models -- a language model (GPT-3) and a text-to-image model (Stable Diffusion) -- to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per-example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions.