Generative Adversarial Networks (GANs) are a widely used tool for generative modeling of complex data. Despite their empirical success, the training of GANs is not fully understood due to the joint min-max optimization of the generator and the discriminator. This paper analyzes these joint dynamics when the true samples, as well as the generated samples, are discrete, finite sets, and the discriminator is kernel-based. A simple yet expressive framework for analyzing training, the \textit{Isolated Points Model}, is introduced. In the proposed model, the distance between true samples greatly exceeds the kernel width, so each generated point is influenced by at most one true point. Our model enables precise characterization of the conditions for convergence to both good and bad minima. In particular, the analysis explains two common failure modes: (i) approximate mode collapse and (ii) divergence. Numerical simulations are provided that predictably replicate these behaviors.
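As a rough illustration of the setting described above (the notation here is our own and is not taken from the paper): when the discriminator is kernel-based, one plausible instantiation is an MMD-type witness with a kernel $k_\sigma$ of width $\sigma$, in which case the min-max game over generated points $x_1,\dots,x_n$ and true points $y_1,\dots,y_m$ reduces to a minimization, and the isolation assumption asks that true samples be far apart relative to the kernel width. A minimal sketch under these assumptions:

\[
  \min_{x_1,\dots,x_n}\;
  \frac{1}{n^2}\sum_{i,i'} k_\sigma(x_i,x_{i'})
  \;-\; \frac{2}{nm}\sum_{i,j} k_\sigma(x_i,y_j)
  \;+\; \frac{1}{m^2}\sum_{j,j'} k_\sigma(y_j,y_{j'}),
  \qquad
  k_\sigma(u,v) = \exp\!\Big(-\tfrac{\|u-v\|^2}{2\sigma^2}\Big),
\]
\[
  \text{with the isolation condition}\qquad
  \min_{j \neq j'} \|y_j - y_{j'}\| \;\gg\; \sigma ,
\]

so that each generated point $x_i$ feels the pull of at most one true point $y_j$, as stated in the abstract.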