In this paper, we introduce a new task, interactive image editing via conversational language, in which users guide an agent to edit images through multi-turn natural language dialogue. In each dialogue turn, the agent takes a source image and a natural language description from the user as input and generates a new image that follows the textual description. We introduce two new datasets for this task, Zap-Seq and DeepFashion-Seq. We propose a novel Sequential Attention Generative Adversarial Network (SeqAttnGAN) framework, which applies a neural state tracker to encode both the source image and the textual description in each dialogue turn and generates a high-quality new image consistent with both the preceding images and the dialogue context. To achieve better region-specific text-to-image generation, we also incorporate an attention mechanism into the model. Experiments on the two new datasets show that the proposed SeqAttnGAN model outperforms state-of-the-art (SOTA) approaches on the dialogue-based image editing task. Detailed quantitative evaluation and a user study further demonstrate that our model is more effective than SOTA baselines for image generation, in terms of both visual quality and text-image consistency.
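As a rough illustration only (not the authors' implementation), the sketch below shows how a turn-by-turn generation loop of the kind described above might be wired up in PyTorch: a recurrent state tracker fuses the current image and instruction features into a dialogue state, and a generator attends over word-level text features before decoding the next image. All module names, dimensions, and the simple word-level attention are assumptions for exposition.

```python
# A minimal sketch (not the paper's code) of dialogue-conditioned image generation,
# assuming PyTorch; all names and sizes here are hypothetical.
import torch
import torch.nn as nn

class DialogueStateTracker(nn.Module):
    """Fuses current image and text features into a recurrent dialogue state."""
    def __init__(self, img_dim=256, txt_dim=256, state_dim=512):
        super().__init__()
        self.gru = nn.GRUCell(img_dim + txt_dim, state_dim)

    def forward(self, img_feat, txt_feat, prev_state):
        return self.gru(torch.cat([img_feat, txt_feat], dim=-1), prev_state)

class AttnGenerator(nn.Module):
    """Generates an image from the dialogue state plus a text context obtained by
    attending over word features (a stand-in for region-specific attention)."""
    def __init__(self, state_dim=512, txt_dim=256, img_size=64):
        super().__init__()
        self.query = nn.Linear(state_dim, txt_dim)
        self.decode = nn.Sequential(
            nn.Linear(state_dim + txt_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * img_size * img_size), nn.Tanh())
        self.img_size = img_size

    def forward(self, state, word_feats):            # word_feats: (B, T, txt_dim)
        attn = torch.softmax(
            torch.bmm(word_feats, self.query(state).unsqueeze(-1)), dim=1)
        context = (attn * word_feats).sum(dim=1)     # attended text context
        img = self.decode(torch.cat([state, context], dim=-1))
        return img.view(-1, 3, self.img_size, self.img_size)

# One dialogue turn with random placeholder features:
B, img_dim, txt_dim, state_dim, T = 2, 256, 256, 512, 8
tracker, generator = DialogueStateTracker(), AttnGenerator()
state = torch.zeros(B, state_dim)                    # initial dialogue state
img_feat, txt_feat = torch.randn(B, img_dim), torch.randn(B, txt_dim)
word_feats = torch.randn(B, T, txt_dim)
state = tracker(img_feat, txt_feat, state)           # update state for this turn
new_image = generator(state, word_feats)             # (B, 3, 64, 64) edited image
```

In an actual system the image and word features would come from trained encoders, and the generator would be adversarially trained; the sketch only shows how the per-turn state update and attention-conditioned decoding fit together.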