The generative adversarial network (GAN) is a well-known model for learning high-dimensional distributions, but the mechanism behind its generalization ability is not well understood. In particular, GANs are vulnerable to the memorization phenomenon: eventual convergence to the empirical distribution. We consider a simplified GAN model in which the generator is replaced by a density, and analyze how the discriminator contributes to generalization. We show that, with early stopping, the generalization error measured in the Wasserstein metric escapes the curse of dimensionality, even though memorization is inevitable in the long run. In addition, we present a hardness-of-learning result for WGAN.
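To make the "curse of dimensionality" claim concrete, the sketch below estimates the Wasserstein-1 distance between an empirical sample and its target distribution via quantile matching. This is only an illustration of the metric the abstract refers to, not the paper's model: the quantile-matching estimator, the standard-normal target, and the sample sizes are all assumptions chosen for the demo. (In one dimension the empirical W1 shrinks like roughly n^(-1/2); in d dimensions the rate degrades to roughly n^(-1/d), which is the slowdown behind memorization-based learning.)

```python
import numpy as np

def w1_empirical(sample, reference):
    """Approximate the Wasserstein-1 distance between the empirical
    distribution of `sample` and the distribution represented by a
    (much larger) `reference` draw, via quantile matching.

    In 1-D, W1 equals the integrated absolute difference of the
    quantile functions, so we compare the sorted sample against the
    reference quantiles at midpoint levels (i + 0.5) / n.
    """
    n = len(sample)
    levels = (np.arange(n) + 0.5) / n
    target_quantiles = np.quantile(reference, levels)
    return float(np.mean(np.abs(np.sort(sample) - target_quantiles)))

rng = np.random.default_rng(0)
reference = rng.standard_normal(100_000)  # stand-in for the target N(0, 1)

# The empirical distribution converges to the target only slowly in
# Wasserstein distance as the sample size n grows.
dists = {}
for n in (10, 100, 1000):
    dists[n] = w1_empirical(rng.standard_normal(n), reference)
    print(f"n = {n:4d}  W1 estimate = {dists[n]:.3f}")
```

The printed distances shrink as n grows, but in high dimension this decay would be far slower, which is why a generator that merely memorizes the training sample generalizes poorly under the Wasserstein metric.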