In this paper, we propose a novel loss function for training Generative Adversarial Networks (GANs), aiming at a deeper theoretical understanding as well as improved stability and performance of the underlying optimization problem. The new loss function is based on cumulant generating functions, giving rise to \emph{Cumulant GAN}. Relying on a recently derived variational formula, we show that the corresponding optimization problem is equivalent to R{\'e}nyi divergence minimization, thus offering a (partially) unified perspective on GAN losses: the R{\'e}nyi family encompasses Kullback-Leibler divergence (KLD), reverse KLD, Hellinger distance and $\chi^2$-divergence. Wasserstein GAN is also a member of the cumulant GAN family. In terms of stability, we rigorously prove linear convergence of cumulant GAN to the Nash equilibrium for a linear discriminator, Gaussian distributions and the standard gradient descent-ascent algorithm. Finally, we experimentally demonstrate that image generation is more robust relative to Wasserstein GAN, and that it is substantially improved in terms of both inception score and Fr\'echet inception distance when both weaker and stronger discriminators are considered.
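For orientation, a standard form of the R{\'e}nyi divergence referenced above is recalled below; this is background notation only, and the cumulant-based parametrization of the loss is defined in the body of the paper rather than here:
\begin{equation*}
R_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha-1}\,\log \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx,
\qquad \alpha > 0,\ \alpha \neq 1 .
\end{equation*}
In the limit $\alpha \to 1$ this recovers the KLD $\mathrm{KL}(P\|Q)$; $\alpha = \tfrac12$ gives a monotone function of the Hellinger distance and $\alpha = 2$ a monotone function of the $\chi^2$-divergence, while the reverse KLD $\mathrm{KL}(Q\|P)$ is obtained from the rescaled limit $\lim_{\alpha \to 0} R_\alpha(P\|Q)/\alpha$ via the skew symmetry $R_\alpha(P\|Q) = \tfrac{\alpha}{1-\alpha} R_{1-\alpha}(Q\|P)$.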