The conventional understanding of adversarial training in generative adversarial networks (GANs) is that the discriminator is trained to estimate a divergence and the generator learns to minimize this divergence. We argue that, although many GAN variants were developed following this paradigm, the current theoretical understanding of GANs is inconsistent with their practical algorithms. In this paper, we leverage Wasserstein gradient flows, which characterize the evolution of particles in the sample space, to gain theoretical insights into and algorithmic inspiration for GANs. We introduce a unified generative modeling framework, MonoFlow, in which the particle evolution is rescaled by a monotonically increasing mapping of the log density ratio. Under our framework, adversarial training can be viewed as a procedure that first obtains MonoFlow's vector field by training the discriminator; the generator then learns to draw the particle flow defined by this vector field. We also reveal a fundamental difference between variational divergence minimization and adversarial training. This analysis helps us identify which types of generator loss functions lead to successful GAN training, and it suggests that GANs admit more loss designs than those in the literature (e.g., the non-saturating loss), as long as they realize MonoFlow. Empirical studies are included to validate the effectiveness of our framework.
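The particle-flow view above can be illustrated with a toy sketch. This is an illustrative assumption, not the paper's algorithm: we take the vector field to be h(l(x)) * grad l(x), where l(x) is the log density ratio log p_data(x) / p_model(x) and h is a monotonically increasing positive rescaling. Both densities are fixed 1-D Gaussians so that l and its gradient are available in closed form, standing in for the log density ratio a trained discriminator would estimate; the function names and the sigmoid choice of h are hypothetical.

```python
import numpy as np

MU_DATA, MU_MODEL, SIGMA = 2.0, 0.0, 1.0  # toy Gaussians (assumed setup)

def log_ratio(x):
    """l(x) = log N(x; MU_DATA, SIGMA) - log N(x; MU_MODEL, SIGMA)."""
    return ((x - MU_MODEL) ** 2 - (x - MU_DATA) ** 2) / (2 * SIGMA ** 2)

def grad_log_ratio(x):
    """Gradient of l(x); constant here because the variances are equal."""
    return ((x - MU_MODEL) - (x - MU_DATA)) / SIGMA ** 2

def h(l):
    """A monotonically increasing, positive rescaling (hypothetical choice)."""
    return 1.0 / (1.0 + np.exp(-l))  # sigmoid of the log density ratio

def monoflow_step(particles, step_size=0.1):
    """One Euler step along the rescaled vector field h(l(x)) * grad l(x)."""
    return particles + step_size * h(log_ratio(particles)) * grad_log_ratio(particles)

rng = np.random.default_rng(0)
init = rng.normal(MU_MODEL, 1.0, size=500)  # samples from the "model"
particles = init.copy()
for _ in range(50):
    particles = monoflow_step(particles)
# Particles drift toward the data region around MU_DATA = 2.0.
```

In the actual adversarial setting, the closed-form `log_ratio` would be replaced by a discriminator retrained as the particles move, and the generator would be fit to reproduce the resulting flow.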