The crux of text-to-image synthesis lies in the difficulty of preserving cross-modality semantic consistency between the input text and the synthesized image. Typical methods, which seek to model the text-to-image mapping directly, can only capture keywords in the text that indicate common objects or actions, but fail to learn their spatial distribution patterns. An effective way to circumvent this limitation is to generate an image layout as guidance, which a few methods have attempted. Nevertheless, these methods fail to generate practically effective layouts due to the diversity of input text and object locations. In this paper, we pursue effective modeling in both text-to-layout generation and layout-to-image synthesis. Specifically, we formulate text-to-layout generation as a sequence-to-sequence modeling task, and build our model upon the Transformer to learn the spatial relationships between objects by modeling the sequential dependencies between them. In the layout-to-image synthesis stage, we focus on learning the textual-visual semantic alignment per object in the layout, so as to precisely incorporate the input text into the layout-to-image synthesis process. To evaluate the quality of generated layouts, we design a new metric, dubbed Layout Quality Score, which considers both the absolute distribution errors of bounding boxes in the layout and the mutual spatial relationships between them. Extensive experiments on three datasets demonstrate the superior performance of our method over state-of-the-art methods on both predicting the layout and synthesizing the image from the given text.
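To make the two components of the proposed metric concrete, the following is a minimal, hypothetical sketch of a layout-quality score that combines (1) the absolute placement error of index-matched bounding boxes and (2) the agreement of pairwise spatial relations between boxes. The function names, the box format `(x, y, w, h)` in normalized coordinates, and the way the two terms are combined are assumptions for illustration; the paper's exact Layout Quality Score formulation is not reproduced here.

```python
def relation(a, b):
    """Coarse spatial relation of box a relative to box b, by center position.
    Boxes are (x, y, w, h) tuples in normalized [0, 1] coordinates."""
    ax, ay = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx, by = b[0] + b[2] / 2, b[1] + b[3] / 2
    return (ax < bx, ay < by)  # (a left of b?, a above b?)

def layout_quality_score(pred, gt):
    """Hypothetical layout-quality metric (higher is better, max 1.0).

    pred, gt: equal-length lists of (x, y, w, h) boxes, index-matched.
    Combines mean absolute box-parameter error with the fraction of
    box pairs whose coarse spatial relation matches the ground truth.
    """
    assert len(pred) == len(gt) and pred
    n = len(pred)
    # (1) absolute distribution error: mean L1 distance over box parameters
    abs_err = sum(
        sum(abs(p - g) for p, g in zip(pb, gb)) / 4
        for pb, gb in zip(pred, gt)
    ) / n
    # (2) mutual spatial relationships: pairwise relation agreement
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    if pairs:
        rel_acc = sum(
            relation(pred[i], pred[j]) == relation(gt[i], gt[j])
            for i, j in pairs
        ) / len(pairs)
    else:
        rel_acc = 1.0  # a single box has no pairwise relations
    # combine: relation agreement discounted by placement error
    return rel_acc * (1.0 - min(abs_err, 1.0))
```

A perfectly reproduced layout scores 1.0; small positional drift that preserves the relative arrangement of objects lowers the score only mildly, while swapping object positions breaks the pairwise-relation term.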