In this paper, we introduce a novel lightweight generative adversarial network that effectively captures long-range dependencies in the image generation process and produces high-quality results with a much simpler architecture. To achieve this, we first introduce a long-range module that allows the network to dynamically adjust the number of focused sampling pixels and to augment the sampling locations. This breaks the limitation imposed by the fixed geometric structure of the convolution operator and captures long-range dependencies in both the spatial and channel-wise directions. In addition, the proposed long-range module can highlight negative relations between pixels, acting as a regularizer that stabilizes training. Furthermore, we propose a new generation strategy that introduces metadata into the image generation process to provide basic information about the target images, which stabilizes and speeds up training. Our long-range module introduces only a few additional parameters and is easily inserted into existing models to capture long-range dependencies. Extensive experiments demonstrate the competitive performance of our method despite its lightweight architecture.
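To make the idea of preserving negative pixel relations concrete, the following is a minimal sketch of a long-range aggregation step that keeps signed similarities instead of discarding them through a softmax. The function names and the absolute-value normalization are assumptions for illustration only, not the paper's actual formulation.

```python
# Hypothetical sketch: aggregate per-pixel features with signed weights,
# so negative relations between pixels contribute (with a minus sign)
# rather than being suppressed. Pure Python for clarity; a real module
# would operate on tensors.

def dot(u, v):
    """Dot product of two feature vectors."""
    return sum(a * b for a, b in zip(u, v))

def long_range_aggregate(features):
    """features: list of per-pixel feature vectors (lists of floats).

    For each query pixel, compute a signed similarity to every pixel,
    normalize by the total absolute similarity (keeping the sign), and
    return the weighted sum of all feature vectors.
    """
    out = []
    for q in features:
        # Signed similarity to every pixel; may be negative.
        sims = [dot(q, k) for k in features]
        # Normalize by total magnitude so weights stay bounded,
        # but preserve negative weights (unlike a softmax).
        denom = sum(abs(s) for s in sims) or 1.0
        weights = [s / denom for s in sims]
        # Weighted mixture of all pixel features.
        agg = [sum(w * v[i] for w, v in zip(weights, features))
               for i in range(len(q))]
        out.append(agg)
    return out
```

With features [1, 0], [0, 1], and [-1, 0], the first pixel assigns weight +0.5 to itself and −0.5 to its anti-correlated counterpart, so the negative relation reinforces rather than cancels its response.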