Generative adversarial networks (GANs) have achieved impressive performance in data synthesis and have driven the development of many applications. However, GANs are known to be hard to train due to their bilevel objective, which leads to problems such as non-convergence, mode collapse, and vanishing gradients. In this paper, we propose a new generative model, the generative adversarial NTK (GA-NTK), that has a single-level objective. GA-NTK keeps the spirit of adversarial learning, which helps generate plausible data, while avoiding the training difficulties of GANs. This is achieved by modeling the discriminator as a Gaussian process with a neural tangent kernel (NTK-GP) whose training dynamics can be described completely in closed form. We analyze the convergence behavior of GA-NTK trained by gradient descent and give sufficient conditions for convergence. We also conduct extensive experiments to study the advantages and limitations of GA-NTK and propose techniques that make it more practical.
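To make the closed-form training dynamics concrete, the display below is a minimal sketch of the standard infinite-width NTK result for a network trained by gradient flow on squared loss; the notation ($\Theta$, $X$, $y$, $\eta$, $t$) is illustrative and not necessarily the paper's own. Under these assumptions, the mean output of the NTK-GP discriminator on an input $x$ after training time $t$ on data $(X, y)$ is

\[
\bar{f}_t(x) \;=\; \Theta(x, X)\,\Theta(X, X)^{-1}\bigl(I - e^{-\eta\,\Theta(X, X)\,t}\bigr)\,y,
\]

where $\Theta$ denotes the neural tangent kernel. An expression of this kind is differentiable with respect to the generated samples appearing in $X$ or $x$, which is the sort of property that lets the generator be optimized directly against the discriminator's predictions in a single-level objective.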