Owing to their outstanding capability for data generation, Generative Adversarial Networks (GANs) have attracted considerable attention in unsupervised learning. However, training GANs is difficult, because the distribution seen by the discriminator keeps changing during training, which leads to unstable image representations. In this paper, we address the problem of training GANs from a novel perspective, \emph{i.e.,} robust image classification. Motivated by studies on robust image representation, we propose a simple yet effective module, namely AdaptiveMix, for GANs, which shrinks the regions of training data in the image representation space of the discriminator. Considering that it is intractable to directly bound the feature space, we propose to construct hard samples and narrow the feature distance between hard and easy samples. The hard samples are constructed by mixing a pair of training images. We evaluate the effectiveness of our AdaptiveMix with widely used and state-of-the-art GAN architectures. The evaluation results demonstrate that our AdaptiveMix facilitates the training of GANs and effectively improves the image quality of generated samples. We also show that our AdaptiveMix can be further applied to image classification and Out-Of-Distribution (OOD) detection tasks by combining it with state-of-the-art methods. Extensive experiments on seven publicly available datasets show that our method effectively boosts the performance of baselines. The code is publicly available at https://github.com/WentianZhang-ML/AdaptiveMix.
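To make the core idea concrete, the following is a minimal sketch of the mixing-and-shrinking step described above, assuming the hard sample is a Mixup-style convex combination of two training images and that the feature distance is measured against the matching mixture of easy-sample features; the function and argument names are illustrative, not the authors' API.

```python
import torch
import torch.nn.functional as F

def adaptivemix_loss(feature_extractor, x_a, x_b, lam=None):
    """Sketch of the idea in the abstract: build a hard sample by mixing a
    pair of training images and shrink the feature distance between the hard
    sample and its easy sources in the discriminator's representation space.
    Names and the exact mixing/target choices are assumptions for illustration.
    """
    if lam is None:
        # Mixing coefficient; the paper may sample it differently (assumption).
        lam = torch.distributions.Beta(1.0, 1.0).sample().item()

    # Hard sample: convex combination of two training images (Mixup-style).
    x_mix = lam * x_a + (1.0 - lam) * x_b

    # Features from the discriminator's image representation space.
    f_a = feature_extractor(x_a)
    f_b = feature_extractor(x_b)
    f_mix = feature_extractor(x_mix)

    # Narrow the distance between the hard sample's feature and the
    # corresponding mixture of easy-sample features (assumed target).
    target = lam * f_a + (1.0 - lam) * f_b
    return F.mse_loss(f_mix, target)
```

In practice, a term of this form would be added to the discriminator's usual adversarial objective, so the discriminator learns a representation whose training-data regions stay compact while the GAN is trained as usual.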