A generative adversarial network (GAN) is formulated as a two-player game between a generator (G) and a discriminator (D), where D is asked to differentiate whether an image comes from real data or is produced by G. Under such a formulation, D acts as the rule maker and hence tends to dominate the competition. Towards a fairer game in GANs, we propose a new paradigm for adversarial training, in which G assigns a task to D as well. Specifically, given an image, we expect D to extract representative features that can be adequately decoded by G to reconstruct the input. In this way, instead of learning freely, D is urged to align with the view of G for domain classification. Experimental results on various datasets demonstrate the substantial superiority of our approach over the baselines. For instance, we improve the FID of StyleGAN2 from 4.30 to 2.55 on LSUN Bedroom and from 4.04 to 2.82 on LSUN Church. We believe that the pioneering attempt presented in this work could inspire the community to design better generator-leading tasks for GAN improvement.
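As a rough illustration of the generator-leading task described above (the exact objective is not given in this abstract), the idea can be sketched as a reconstruction term added to the standard adversarial loss. In this sketch, $\phi$ (the feature extractor inside D), $\lambda$ (a weighting hyper-parameter), and $d(\cdot,\cdot)$ (an image-space distance) are assumptions, not notation from the paper:

$$
\mathcal{L}_D \;=\; \mathcal{L}_{\mathrm{adv}}(D) \;+\; \lambda\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\, d\big(G(\phi(x)),\, x\big) \,\right],
$$

where $\phi(x)$ denotes the representative features D extracts from an image $x$, and $G(\phi(x))$ is the reconstruction decoded by G. Under this reading, D can no longer choose arbitrary discriminative features: its representation must remain decodable by G, which is what aligns D with the view of G.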