We propose a novel regularizer to improve the training of Generative Adversarial Networks (GANs). The motivation is that when the discriminator D spreads out its model capacity in the right way, the learning signals it gives the generator G are more informative and diverse. These, in turn, help G explore better and discover the real data manifold, while avoiding large, unstable jumps caused by erroneous extrapolation on the part of D. Our regularizer guides the rectifier discriminator D to allocate its model capacity better by encouraging the binary activation patterns on selected internal layers of D to have high joint entropy. Experimental results on both synthetic and real datasets demonstrate improvements in the stability and convergence speed of GAN training, as well as higher sample quality. The approach also yields higher classification accuracy in semi-supervised learning.
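To make the idea concrete, below is a minimal sketch of such an entropy-style penalty under an assumed PyTorch setup. The function name `bre_regularizer`, the soft-sign binarization, the smoothing constant `eps`, and the weight `lambda_bre` in the usage line are illustrative choices, not the paper's exact formulation. The sketch penalizes (i) per-unit sign imbalance across a batch and (ii) correlation between the sign patterns of different samples, two tractable proxies for high joint entropy of the binary activation patterns.

```python
import torch

def bre_regularizer(h: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Entropy-style penalty on the sign patterns of a rectifier layer.

    h: (batch, d) pre-activations of a selected hidden layer of D.
    Lower values mean each unit fires for about half the batch and the
    sign patterns of different samples are nearly uncorrelated -- a
    proxy for high joint entropy of the binary activation patterns.
    """
    # Soft binarization: s is approximately sign(h) but differentiable.
    s = h / (h.abs() + eps)              # (batch, d), entries in (-1, 1)

    # Marginal term: the mean sign of each unit over the batch should be
    # near zero, i.e. the unit is active for roughly half the samples.
    marginal = s.mean(dim=0).pow(2).mean()

    # Pairwise term: sign patterns of different samples should be nearly
    # orthogonal, so D carves its input space into many linear regions.
    b, d = s.shape
    gram = (s @ s.t()) / d               # (batch, batch) normalized inner products
    off_diag = gram - torch.diag(torch.diagonal(gram))
    pairwise = off_diag.abs().sum() / (b * (b - 1))

    return marginal + pairwise
```

In training, the penalty would be added to the discriminator objective on the chosen layer's pre-activations, e.g. `d_loss = gan_loss + lambda_bre * bre_regularizer(hidden_preacts)`; the weight `lambda_bre` and the choice of which internal layers to regularize are the natural knobs to tune.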