Large text-guided diffusion models, such as DALLE-2, are able to generate stunning photorealistic images given natural language descriptions. While such models are highly flexible, they struggle to understand the composition of certain concepts, such as confusing the attributes of different objects or the relations between objects. In this paper, we propose an alternative structured approach to compositional generation using diffusion models. An image is generated by composing a set of diffusion models, each of which models a certain component of the image. To do this, we interpret diffusion models as energy-based models in which the data distributions defined by the energy functions may be explicitly combined. The proposed method can generate scenes at test time that are substantially more complex than those seen in training, composing sentence descriptions, object relations, human facial attributes, and even generalizing to new combinations that are rarely seen in the real world. We further illustrate how our approach may be used to compose pre-trained text-guided diffusion models and generate photorealistic images containing all the details described in the input descriptions, including the binding of certain object attributes, which has been shown to be difficult for DALLE-2. These results point to the effectiveness of the proposed method in promoting structured generalization for visual generation. Project page: https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/
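To make the composition idea concrete, below is a minimal sketch of how a set of concept-conditioned diffusion models can be combined at sampling time by summing their guidance directions on top of a shared unconditional estimate (a conjunction-style combination of the distributions each concept defines). The function name `composed_noise_estimate`, the `eps_model` interface, and the argument names are illustrative assumptions, not the authors' released API.

```python
import torch

def composed_noise_estimate(eps_model, x_t, t, concept_conds, weights):
    """Sketch of composing concept-conditioned noise estimates at sampling time.

    Assumptions (hypothetical interface):
      - eps_model(x_t, t, cond) returns the predicted noise for latent x_t at step t;
        cond=None yields the unconditional estimate.
      - concept_conds is a list of conditioning inputs (e.g. text embeddings), one per concept.
      - weights is a list of per-concept guidance weights.

    The composed estimate adds each concept's guidance direction
    (conditional minus unconditional) to the unconditional estimate.
    """
    eps_uncond = eps_model(x_t, t, cond=None)        # shared unconditional estimate
    eps = eps_uncond.clone()
    for cond_i, w_i in zip(concept_conds, weights):
        eps_cond = eps_model(x_t, t, cond=cond_i)    # estimate conditioned on concept i
        eps = eps + w_i * (eps_cond - eps_uncond)    # weighted guidance toward concept i
    return eps
```

The composed estimate would then replace the usual conditional noise prediction inside a standard sampler (e.g. a DDPM or DDIM update), so each denoising step is steered jointly by all of the specified concepts.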