We introduce the GANformer2 model, an iterative object-oriented transformer, explored for the task of generative modeling. The network incorporates strong and explicit structural priors to reflect the compositional nature of visual scenes, and synthesizes images through a sequential process. It operates in two stages: a fast and lightweight planning phase, where we draft a high-level scene layout, followed by an attention-based execution phase, where the layout is refined and evolves into a rich and detailed picture. Our model moves away from conventional black-box GAN architectures that feature a flat and monolithic latent space towards a transparent design that encourages efficiency, controllability and interpretability. We demonstrate GANformer2's strengths and qualities through a careful evaluation over a range of datasets, from multi-object CLEVR scenes to the challenging COCO images, showing it achieves state-of-the-art performance in terms of visual quality, diversity and consistency. Further experiments demonstrate the model's disentanglement and provide deeper insight into its generative process, as it proceeds step-by-step from a rough initial sketch, to a detailed layout that accounts for objects' depths and dependencies, and up to the final high-resolution depiction of vibrant and intricate real-world scenes. See https://github.com/dorarad/gansformer for the model implementation.
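To make the two-stage pipeline concrete, below is a minimal conceptual sketch of a planning stage that drafts a coarse, object-wise layout from latent variables, followed by an attention-based execution stage that refines it into image features. All module names, dimensions, and the choice of `torch.nn.MultiheadAttention` are illustrative assumptions for this sketch, not the released GANformer2 implementation (see the linked repository for that).

```python
# Hypothetical sketch of a planning -> execution generator, not the official code.
import torch
import torch.nn as nn

class PlanningStage(nn.Module):
    """Lightweight planner: maps per-object latents to a coarse soft layout."""
    def __init__(self, num_objects=16, latent_dim=128, layout_size=32):
        super().__init__()
        self.num_objects, self.latent_dim, self.layout_size = num_objects, latent_dim, layout_size
        self.to_layout = nn.Linear(latent_dim, layout_size * layout_size)

    def forward(self, batch_size):
        # sample one latent vector per object
        z = torch.randn(batch_size, self.num_objects, self.latent_dim)
        # one coarse spatial map per object, normalized into a soft layout
        layout = self.to_layout(z).view(batch_size, -1, self.layout_size, self.layout_size)
        return z, layout.softmax(dim=1)

class ExecutionStage(nn.Module):
    """Attention-based executor: refines the layout into an image."""
    def __init__(self, latent_dim=128, img_channels=3):
        super().__init__()
        self.latent_dim = latent_dim
        self.attn = nn.MultiheadAttention(latent_dim, num_heads=4, batch_first=True)
        self.to_rgb = nn.Conv2d(latent_dim, img_channels, kernel_size=1)

    def forward(self, z, layout):
        b, k, h, w = layout.shape
        # each grid cell starts as a layout-weighted mixture of object latents
        feats = torch.einsum('bkhw,bkc->bhwc', layout, z).reshape(b, h * w, self.latent_dim)
        # grid cells attend back to the object latents to refine the draft
        feats, _ = self.attn(query=feats, key=z, value=z)
        feats = feats.view(b, h, w, self.latent_dim).permute(0, 3, 1, 2)
        return self.to_rgb(feats)

planner, executor = PlanningStage(), ExecutionStage()
z, layout = planner(batch_size=2)
img = executor(z, layout)
print(img.shape)  # torch.Size([2, 3, 32, 32])
```

In this toy setup the planner is cheap (a single linear map per object), while the executor carries the heavier attention computation, mirroring the fast-planning / detailed-execution split described above.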