Recently, transformation-based self-supervised learning has been applied to generative adversarial networks (GANs) to mitigate catastrophic forgetting in the discriminator by introducing a stationary learning environment. However, the separate self-supervised tasks in existing self-supervised GANs pursue a goal inconsistent with generative modeling, because their self-supervised classifiers are agnostic to the generator distribution. To address this problem, we propose a novel self-supervised GAN that unifies the GAN task with the self-supervised task by augmenting the GAN labels (real or fake) through self-supervision of data transformations. Specifically, the original discriminator and the self-supervised classifier are unified into a label-augmented discriminator that predicts the augmented labels, making it aware of both the generator distribution and the data distribution under every transformation, and the discrepancy between the two then provides the signal to optimize the generator. Theoretically, we prove that the optimal generator converges to replicate the real data distribution. Empirically, we show that the proposed method significantly outperforms previous self-supervised and data-augmentation GANs on both generative modeling and representation learning across benchmark datasets.
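The label-augmentation idea above can be illustrated with a minimal sketch. This is a hypothetical encoding, not the paper's exact implementation: we assume K transformations (e.g., the four image rotations commonly used in transformation-based self-supervision) and a single 2K-way label-augmented discriminator whose classes jointly encode the source (real or generated) and the transformation applied.

```python
# Hypothetical sketch of label augmentation, assuming K = 4 transformations
# (e.g., rotations by 0/90/180/270 degrees). The label-augmented discriminator
# is then a single 2K-way classifier: classes 0..K-1 denote real data under
# transformation t, and classes K..2K-1 denote generated data under t.

NUM_TRANSFORMS = 4  # assumed number of self-supervised transformations (K)

def augmented_label(is_real: bool, transform_idx: int) -> int:
    """Map a (real/fake, transformation) pair to one of 2K augmented classes."""
    assert 0 <= transform_idx < NUM_TRANSFORMS
    offset = 0 if is_real else NUM_TRANSFORMS
    return offset + transform_idx

# Example: a real image rotated by 90 degrees (t = 1) vs. a generated one.
real_rot90 = augmented_label(True, 1)   # class 1
fake_rot90 = augmented_label(False, 1)  # class 5
```

Because every augmented class is conditioned on both the source and the transformation, the classifier is no longer agnostic to the generator distribution, and the per-transformation discrepancy between real and fake classes can serve as the generator's training signal.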