Generative adversarial networks (GANs) typically require ample training data to synthesize high-fidelity images. Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting, the underlying cause impeding the generator's convergence. This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator. Unlike existing approaches that rely on standard data augmentations or model regularization, APA alleviates overfitting by employing the generator itself to augment the real data distribution with generated images, adaptively deceiving the discriminator. Extensive experiments demonstrate the effectiveness of APA in improving synthesis quality in the low-data regime. We provide a theoretical analysis to examine the convergence and rationality of our new training strategy. APA is simple and effective, and can be seamlessly added to powerful contemporary GANs, such as StyleGAN2, at negligible computational cost.
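To make the mechanism concrete, below is a minimal sketch of how such adaptive pseudo augmentation could look in a PyTorch-style training loop. This is not the authors' reference implementation: the class and parameter names (`APA`, `target`, `speed_kimg`, `augment_reals`) are illustrative assumptions, and the overfitting heuristic shown (the mean sign of the discriminator's logits on real images) is one plausible choice among several.

```python
import torch

# Hypothetical sketch of Adaptive Pseudo Augmentation (APA).
# Assumption: a standard PyTorch GAN loop where the discriminator D
# returns raw logits and real/fake batches share the same shape.

class APA:
    def __init__(self, target=0.6, interval=4, speed_kimg=500, batch_size=32):
        self.p = 0.0              # current deception probability
        self.target = target      # target value for the overfitting heuristic
        self.interval = interval  # adjust p once every `interval` steps
        # Step size so that p can traverse [0, 1] over roughly
        # `speed_kimg` thousand images (an assumed schedule).
        self.speed = batch_size * interval / (speed_kimg * 1000)
        self.step = 0

    def heuristic(self, real_logits):
        # Overfitting heuristic (assumed): fraction of real samples the
        # discriminator confidently calls real, via the sign of its logits.
        return real_logits.detach().sign().mean().item()

    def update(self, real_logits):
        # Nudge p up when the discriminator looks overconfident on reals
        # (heuristic above target), and down otherwise; clamp to [0, 1].
        self.step += 1
        if self.step % self.interval == 0:
            direction = 1.0 if self.heuristic(real_logits) > self.target else -1.0
            self.p = min(max(self.p + direction * self.speed, 0.0), 1.0)

    def augment_reals(self, reals, fakes):
        # With probability p per sample, present a generated image to the
        # discriminator as if it were real ("pseudo augmentation").
        mask = torch.rand(reals.size(0), device=reals.device) < self.p
        mask = mask.view(-1, 1, 1, 1).to(reals.dtype)
        # Detach fakes so no generator gradient flows through this path.
        return mask * fakes.detach() + (1.0 - mask) * reals
```

In use, one would call `apa.update(D(reals))` each step and then feed `apa.augment_reals(reals, fakes)` to the discriminator in place of the real batch; detaching the fakes keeps the deception from leaking gradients back into the generator.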