
Title: Diverse Image Generation via Self-Conditioned GANs

Abstract:

This paper introduces a simple but effective unsupervised method for generating realistic and diverse images: a class-conditional GAN model is trained without using manually annotated class labels. Instead, the model is conditioned on labels obtained automatically by clustering in the discriminator's feature space. The clustering step automatically discovers diverse modes and explicitly requires the generator to cover them. Experiments on standard mode-collapse benchmarks show that the method outperforms several competing approaches at addressing mode collapse. The method also performs well on large-scale datasets such as ImageNet and Places365, improving both image diversity and standard quality metrics over previous methods.
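The self-conditioning step described in the abstract can be sketched as follows: extract features from the discriminator, cluster them, and use the cluster indices as pseudo class labels for the conditional generator. Below is a minimal NumPy sketch under stated assumptions, not the authors' implementation; `cluster_pseudo_labels`, the k-means details, and the toy feature matrix are all illustrative placeholders.

```python
import numpy as np

def cluster_pseudo_labels(features, k, iters=10, seed=0):
    """Assign pseudo class labels by k-means clustering in the
    discriminator's feature space (hypothetical helper; the paper's
    exact clustering procedure may differ)."""
    rng = np.random.default_rng(seed)
    # Initialize centers from randomly chosen feature vectors.
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Distance of every feature to every center, shape (n, k).
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # Update each center to the mean of its assigned features.
        for j in range(k):
            pts = features[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels

# Toy stand-in for discriminator features: two well-separated blobs.
rng = np.random.default_rng(1)
f = np.concatenate([rng.normal(0, 0.1, (50, 8)),
                    rng.normal(5, 0.1, (50, 8))])
labels = cluster_pseudo_labels(f, k=2)
```

In the full method these pseudo-labels would condition both generator and discriminator, with re-clustering performed periodically during training so the partition tracks the evolving feature space.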


Latest Papers

Recently, transformation-based self-supervised learning has been applied to generative adversarial networks (GANs) to mitigate catastrophic forgetting in the discriminator by introducing stationary learning environments. However, the separate self-supervised tasks in existing self-supervised GANs pursue a goal inconsistent with generative modeling, because their self-supervised classifiers are agnostic to the generator distribution. To address this problem, we propose a novel self-supervised GAN that unifies the GAN task with the self-supervised task by augmenting the GAN labels (real or fake) via self-supervision of data transformation. Specifically, the original discriminator and self-supervised classifier are unified into a label-augmented discriminator that predicts the augmented labels, so as to be aware of the generator distribution and the data distribution under every transformation, and then provides the discrepancy between them to optimize the generator. Theoretically, we prove that the optimal generator converges to replicate the real data distribution under mild assumptions. Empirically, we show that the proposed method significantly outperforms previous self-supervised and data augmentation GANs on both generative modeling and representation learning across various benchmark datasets.
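The label-augmentation idea above can be illustrated concretely: with K transformations (e.g., four 90-degree rotations), the discriminator predicts one of 2K classes jointly encoding real/fake and the applied transformation. The sketch below shows one plausible encoding; `transform`, `augmented_label`, and the label layout are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

NUM_T = 4  # four transformations: rotations by 0, 90, 180, 270 degrees

def transform(x, t):
    """Apply the t-th transformation (rotation by t * 90 degrees)."""
    return np.rot90(x, k=t, axes=(0, 1))

def augmented_label(is_real, t, num_t=NUM_T):
    """Fold (real/fake, transformation) into a single class in
    [0, 2 * num_t): labels 0..num_t-1 are real under each
    transformation, num_t..2*num_t-1 are fake.
    (Hypothetical encoding; the paper may order labels differently.)"""
    return t if is_real else num_t + t

x = np.arange(9).reshape(3, 3)  # toy "image"
real_labels = [augmented_label(True, t) for t in range(NUM_T)]
fake_labels = [augmented_label(False, t) for t in range(NUM_T)]
```

A 2K-way classifier trained on these labels sees both distributions under every transformation, so its predicted discrepancy between the real and fake halves of the label space can directly drive the generator update.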
