Some of the most popular methods for improving the stability and performance of GANs involve constraining or regularizing the discriminator. In this paper we consider a largely overlooked regularization technique which we refer to as the Adversary's Assistant (AdvAs). We motivate this using a different perspective from that of prior work. Specifically, we consider a common mismatch between theoretical analysis and practice: analysis often assumes that the discriminator reaches its optimum on each iteration. In practice, this is essentially never true, often leading to poor gradient estimates for the generator. To address this, AdvAs is a theoretically motivated penalty imposed on the generator based on the norm of the gradients used to train the discriminator. This encourages the generator to move towards points where the discriminator is optimal. We demonstrate the effect of applying AdvAs to several GAN objectives, datasets, and network architectures. The results indicate a reduced mismatch between theory and practice and show that AdvAs can improve GAN training, as measured by FID scores.
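The core idea can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's exact formulation: the function names (`advas_penalty`), the toy linear networks, and the penalty weight `lam` are all hypothetical. The key mechanism is that the generator's loss gains a term proportional to the squared norm of the discriminator's parameter gradients; at a discriminator optimum this gradient is zero, so the penalty vanishes, pushing the generator towards points where the discriminator is already optimal.

```python
import torch
import torch.nn as nn

def advas_penalty(disc_loss, discriminator):
    """Squared norm of d(disc_loss)/d(theta_D), built with create_graph=True
    so the penalty can be backpropagated through to the generator."""
    grads = torch.autograd.grad(disc_loss,
                                list(discriminator.parameters()),
                                create_graph=True)
    return sum((g ** 2).sum() for g in grads)

# Toy 1-D GAN pieces, purely to show usage (hypothetical architectures).
G = nn.Linear(2, 1)   # generator: noise -> sample
D = nn.Linear(1, 1)   # discriminator: sample -> logit
bce = nn.BCEWithLogitsLoss()

z = torch.randn(8, 2)
fake = G(z)
real = torch.randn(8, 1) + 3.0

# Standard discriminator loss on this batch.
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))

# Generator objective: the usual adversarial term plus the AdvAs-style
# penalty, weighted by a hypothetical coefficient lam.
lam = 0.1
pen = advas_penalty(d_loss, D)
g_loss = bce(D(fake), torch.ones(8, 1)) + lam * pen
g_loss.backward()  # gradients flow to G through both terms
```

Because `d_loss` depends on `fake`, the penalty is differentiable with respect to the generator's parameters, which is what lets it act as a regularizer on the generator rather than on the discriminator.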