Recent text-to-image diffusion models are able to generate convincing results of unprecedented quality. However, it is nearly impossible to control the shapes of different regions/objects or their layout in a fine-grained fashion. Previous attempts to provide such controls were hindered by their reliance on a fixed set of labels. To this end, we present SpaText, a new method for text-to-image generation using open-vocabulary scene control. In addition to a global text prompt that describes the entire scene, the user provides a segmentation map in which each region of interest is annotated by a free-form natural language description. Due to the lack of large-scale datasets that contain a detailed textual description for each region in the image, we choose to leverage current large-scale text-to-image datasets and base our approach on a novel CLIP-based spatio-textual representation, demonstrating its effectiveness on two state-of-the-art diffusion models: pixel-based and latent-based. In addition, we show how to extend the classifier-free guidance method in diffusion models to the multi-conditional case and present an alternative accelerated inference algorithm. Finally, we offer several automatic evaluation metrics and use them, in addition to FID scores and a user study, to evaluate our method and show that it achieves state-of-the-art results on image generation with free-form textual scene control.
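To make the two key components more concrete, below is a minimal PyTorch sketch of (i) building a spatio-textual representation by placing a per-region CLIP text embedding at each annotated region, and (ii) one plausible way to combine several conditions under classifier-free guidance. The function names (build_spatio_textual_map, multi_cond_cfg), the use of the OpenAI clip package with the ViT-B/32 checkpoint, and the direct use of CLIP text embeddings for each region are illustrative assumptions made for brevity; the paper's actual embedding pipeline and guidance formulation may differ.

```python
import torch
import clip  # OpenAI CLIP package: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)


def build_spatio_textual_map(masks, prompts, size=(64, 64)):
    """Place each region's CLIP text embedding at that region's pixels.

    masks:   list of boolean (H, W) tensors, one per region of interest
    prompts: list of free-form descriptions, one per region
    Returns a (embed_dim, H, W) tensor; zeros wherever the user left the map empty.
    """
    H, W = size
    embed_dim = clip_model.text_projection.shape[1]
    st_map = torch.zeros(embed_dim, H, W, device=device)
    with torch.no_grad():
        tokens = clip.tokenize(prompts).to(device)
        embs = clip_model.encode_text(tokens).float()   # (num_regions, embed_dim)
        embs = embs / embs.norm(dim=-1, keepdim=True)   # unit-normalize each embedding
    for mask, emb in zip(masks, embs):
        st_map[:, mask.to(device)] = emb[:, None]       # broadcast over the masked pixels
    return st_map


def multi_cond_cfg(eps_uncond, eps_conds, scales):
    """A straightforward multi-conditional classifier-free guidance combination:
    start from the unconditional noise prediction and add one guided direction per
    condition, each with its own scale (illustrative, not the paper's exact formula).
    """
    eps = eps_uncond.clone()
    for eps_c, scale in zip(eps_conds, scales):
        eps = eps + scale * (eps_c - eps_uncond)
    return eps
```

One natural way to use such a map, whether in a pixel-based or a latent-based diffusion model, is to resize it to the model's spatial resolution and feed it as an additional input alongside the noised sample, while eps_conds would hold the noise predictions obtained under the global prompt and under the spatio-textual map, respectively; the exact injection mechanism in the paper may differ from this sketch.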