Recent years have witnessed rapid progress in generative adversarial networks (GANs). However, the success of GAN models hinges on a large amount of training data. This work proposes a regularization approach for training robust GAN models on limited data. We theoretically show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data. Extensive experiments on several benchmark datasets demonstrate that the proposed regularization scheme 1) improves the generalization performance and stabilizes the learning dynamics of GAN models under limited training data, and 2) complements recent data augmentation methods. These properties facilitate training GAN models to achieve state-of-the-art performance when only a limited amount of ImageNet training data is available.
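The abstract does not spell out the form of the regularizer. As a hedged illustration of one anchor-based scheme consistent with the description above (penalizing the discriminator's predictions toward exponential moving averages of its past outputs on real and generated samples), the following PyTorch sketch shows how such a term could plug into a discriminator update. The class name `LeCamRegularizer`, the decay rate, and the weight `lam` are illustrative assumptions, not the authors' reference implementation.

```python
import torch

class LeCamRegularizer:
    """Sketch of an anchor-based regularizer: track exponential moving
    averages (EMAs) of the discriminator's predictions, then penalize
    the squared distance between current predictions and the anchors.
    All names and defaults here are assumptions for illustration."""

    def __init__(self, decay: float = 0.99):
        self.decay = decay
        self.ema_real = 0.0  # EMA of D's outputs on real images
        self.ema_fake = 0.0  # EMA of D's outputs on generated images

    def update(self, d_real: torch.Tensor, d_fake: torch.Tensor) -> None:
        # Update the moving-average anchors from the current batch.
        self.ema_real = self.decay * self.ema_real + (1 - self.decay) * d_real.mean().item()
        self.ema_fake = self.decay * self.ema_fake + (1 - self.decay) * d_fake.mean().item()

    def penalty(self, d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
        # Pull real predictions toward the fake anchor and vice versa:
        # R = E[(D(x) - ema_fake)^2] + E[(D(G(z)) - ema_real)^2]
        return ((d_real - self.ema_fake) ** 2).mean() + ((d_fake - self.ema_real) ** 2).mean()
```

In use, the penalty would simply be added to the usual discriminator loss, e.g. `loss_d = loss_gan + lam * reg.penalty(d_real, d_fake)` after calling `reg.update(d_real.detach(), d_fake.detach())`, where `lam` is a hypothetical regularization weight.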