Recently, transformation-based self-supervised learning has been applied to generative adversarial networks (GANs) to mitigate catastrophic forgetting in the discriminator by introducing a stationary learning environment. However, the separate self-supervised tasks in existing self-supervised GANs induce a goal inconsistent with generative modeling, because their self-supervised classifiers are agnostic to the generator distribution. To address this problem, we propose a novel self-supervised GAN that unifies the GAN task with the self-supervised task by augmenting the GAN labels (real or fake) via self-supervision of data transformation. Specifically, the original discriminator and the self-supervised classifier are unified into a label-augmented discriminator that predicts these augmented labels, making it aware of both the generator distribution and the data distribution under every transformation, and it then provides the discrepancy between the two distributions as the signal for optimizing the generator. Theoretically, we prove that the optimal generator converges to replicate the real data distribution under mild assumptions. Empirically, we show that the proposed method significantly outperforms previous self-supervised and data-augmentation GANs on both generative modeling and representation learning across various benchmark datasets.
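To make the label-augmentation idea concrete, the sketch below shows one plausible instantiation, assuming K = 4 image rotations as the self-supervised transformations. The classifier `D`, the `rotate` helper, and the specific cross-entropy objectives are illustrative assumptions, not the paper's exact formulation; in particular, the generator objective here is a simplified variant of optimizing the discrepancy between the real- and fake-class predictions described in the abstract.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of a label-augmented discriminator (hypothetical names).
# D is a single (2*K)-way classifier whose classes are (real, T_k) for
# k = 0..K-1, followed by (fake, T_k) for k = 0..K-1.

K = 4  # number of transformations: rotations by 0, 90, 180, 270 degrees


def rotate(x, k):
    """Rotate a batch of NCHW images by k * 90 degrees."""
    return torch.rot90(x, k, dims=(2, 3))


def d_loss(D, real, fake):
    """Discriminator: cross-entropy over the 2K augmented labels."""
    losses = []
    for k in range(K):
        # Transformed real samples carry augmented label (real, k) = k.
        labels_real = torch.full((real.size(0),), k,
                                 dtype=torch.long, device=real.device)
        losses.append(F.cross_entropy(D(rotate(real, k)), labels_real))
        # Transformed fake samples carry augmented label (fake, k) = K + k.
        labels_fake = torch.full((fake.size(0),), K + k,
                                 dtype=torch.long, device=fake.device)
        losses.append(F.cross_entropy(D(rotate(fake.detach(), k)), labels_fake))
    return sum(losses) / (2 * K)


def g_loss(D, fake):
    """Generator: push transformed fakes toward the (real, k) classes."""
    losses = []
    for k in range(K):
        labels = torch.full((fake.size(0),), k,
                            dtype=torch.long, device=fake.device)
        losses.append(F.cross_entropy(D(rotate(fake, k)), labels))
    return sum(losses) / K
```

Because every transformed sample, real or fake, passes through the same 2K-way head, the classifier is no longer agnostic to the generator distribution: its predictions over the (real, k) versus (fake, k) classes encode the per-transformation discrepancy that the generator is trained to reduce.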