We introduce the GANsformer, a novel and efficient type of transformer, and explore it for the task of visual generative modeling. The network employs a bipartite structure that enables long-range interactions across the image, while maintaining computation of linear efficiency that can readily scale to high-resolution synthesis. It iteratively propagates information from a set of latent variables to the evolving visual features and vice versa, to support the refinement of each in light of the other and encourage the emergence of compositional representations of objects and scenes. In contrast to the classic transformer architecture, it utilizes multiplicative integration that allows flexible region-based modulation, and can thus be seen as a generalization of the successful StyleGAN network. We demonstrate the model's strength and robustness through a careful evaluation over a range of datasets, from simulated multi-object environments to rich real-world indoor and outdoor scenes, showing it achieves state-of-the-art results in terms of image quality and diversity, while enjoying fast learning and better data-efficiency. Further qualitative and quantitative experiments offer us an insight into the model's inner workings, revealing improved interpretability and stronger disentanglement, and illustrating the benefits and efficacy of our approach. An implementation of the model is available at https://github.com/dorarad/gansformer.
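To make the two ideas named above concrete, here is a minimal NumPy sketch, not the paper's implementation (see the linked repository for that): bipartite attention between a small set of k latents and n image features, whose cost is O(k·n) rather than the O(n²) of standard self-attention and hence linear in the image size, followed by region-based multiplicative modulation that generalizes StyleGAN's single global style vector. All names and shapes here (bipartite_attention, multiplicative_modulation, Wq, Wk, Wv, Wg, Wb, assign) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bipartite_attention(latents, features, Wq, Wk, Wv):
    """One direction of the bipartite structure: each of the k latents
    attends over all n image features. Cost is O(k * n), linear in the
    number of image positions since k is a small constant."""
    q = latents @ Wq                                   # (k, d)
    key = features @ Wk                                # (n, d)
    v = features @ Wv                                  # (n, d)
    attn = softmax(q @ key.T / np.sqrt(q.shape[-1]))   # (k, n)
    return attn @ v                                    # (k, d) updated latents

def multiplicative_modulation(features, latents, Wg, Wb, assign):
    """Reverse direction: each image feature is scaled and shifted by
    the latents it is softly assigned to (assign: (n, k) rows sum to 1),
    a region-based generalization of StyleGAN's global modulation."""
    gamma = assign @ (latents @ Wg)                    # (n, d) per-position scale
    beta = assign @ (latents @ Wb)                     # (n, d) per-position shift
    return features * (1.0 + gamma) + beta

# Toy usage: one round of latents -> features -> latents propagation.
rng = np.random.default_rng(0)
d, k, n = 32, 8, 16 * 16                               # feature dim, latents, positions
latents = rng.standard_normal((k, d))
features = rng.standard_normal((n, d))
Wq, Wk, Wv, Wg, Wb = (rng.standard_normal((d, d)) * 0.1 for _ in range(5))

new_latents = bipartite_attention(latents, features, Wq, Wk, Wv)
# Soft assignment of each position to a latent, reusing the same projections.
assign = softmax(features @ Wk @ (new_latents @ Wq).T / np.sqrt(d), axis=-1)
new_features = multiplicative_modulation(features, new_latents, Wg, Wb, assign)
print(new_latents.shape, new_features.shape)           # (8, 32) (256, 32)
```

Iterating this pair of updates is what the abstract means by propagating information from the latents to the visual features and vice versa; because different latents dominate different regions through the soft assignment, the modulation can specialize per object or scene part rather than applying one style globally.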