Large text-guided diffusion models, such as DALLE-2, are able to generate stunning photorealistic images given natural language descriptions. While such models are highly flexible, they struggle to understand the composition of certain concepts, such as confusing the attributes of different objects or the relations between objects. In this paper, we propose an alternative structured approach to compositional generation using diffusion models. An image is generated by composing a set of diffusion models, each of which models a certain component of the image. To do this, we interpret diffusion models as energy-based models in which the data distributions defined by the energy functions may be explicitly combined. The proposed method can generate scenes at test time that are substantially more complex than those seen in training, composing sentence descriptions, object relations, human facial attributes, and even generalizing to new combinations that are rarely seen in the real world. We further illustrate how our approach may be used to compose pre-trained text-guided diffusion models and generate photorealistic images containing all the details described in the input descriptions, including the binding of certain object attributes that have been shown to be difficult for DALLE-2. These results point to the effectiveness of the proposed method in promoting structured generalization for visual generation.
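As a rough illustration of how combining the distributions of several diffusion models can look in practice, the sketch below composes classifier-free-guidance-style noise predictions from multiple conditions by summing their weighted deviations from the unconditional prediction. The `model` interface, `conditions`, and `weights` are hypothetical placeholders, not the paper's released implementation; this is a minimal sketch of the conjunction-style composition described in the abstract.

```python
import torch

def composed_noise_prediction(model, x_t, t, conditions, weights):
    """Hypothetical sketch: combine per-concept score estimates by summing
    their weighted deviations from the unconditional prediction."""
    eps_uncond = model(x_t, t, cond=None)        # unconditional noise estimate
    eps = eps_uncond.clone()
    for c, w in zip(conditions, weights):
        eps_cond = model(x_t, t, cond=c)         # estimate conditioned on one concept
        eps = eps + w * (eps_cond - eps_uncond)  # add weighted conditional direction
    return eps                                   # use in place of the single-condition estimate
```

At each denoising step, this composed estimate would replace the usual single-condition prediction, so every concept contributes its own "direction" to the sampled image.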