Generative adversarial networks (GANs) are among the most successful models for learning high-complexity, real-world distributions. However, in theory, due to the highly non-convex, non-concave landscape of the minmax training objective, GANs remain among the least understood deep learning models. In this work, we formally study how GANs can efficiently learn certain hierarchically generated distributions that are close to the distribution of real-world images. We prove that when a distribution has a structure we refer to as forward super-resolution, simply training a generative adversarial network using gradient descent ascent (GDA) can learn this distribution efficiently, in terms of both sample and time complexity. We also provide concrete empirical evidence that not only is our forward super-resolution assumption very natural in practice, but the underlying learning mechanisms we study in this paper (which allow GANs to be trained efficiently via GDA in theory) also simulate the actual learning process of GANs on real-world problems.
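To make the training procedure referenced above concrete, below is a minimal sketch of alternating gradient descent ascent for a GAN on the standard minmax objective. This is only an illustration of GDA in PyTorch, not the paper's forward super-resolution construction; the network architectures, dimensions, learning rates, and the synthetic data source are all hypothetical.

```python
# Minimal GDA sketch for GAN training (illustrative only, not the paper's setup):
# the discriminator takes an ascent step on the minmax objective,
# then the generator takes a descent step.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 16  # hypothetical sizes for illustration

G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_G = torch.optim.SGD(G.parameters(), lr=1e-3)
opt_D = torch.optim.SGD(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # stand-in for samples from the target distribution (synthetic here)
    return torch.randn(n, data_dim)

for step in range(1000):
    x_real = real_batch()
    z = torch.randn(x_real.size(0), latent_dim)
    x_fake = G(z)

    # Ascent step on the discriminator: maximize log D(x) + log(1 - D(G(z)))
    loss_D = bce(D(x_real), torch.ones(x_real.size(0), 1)) \
           + bce(D(x_fake.detach()), torch.zeros(x_real.size(0), 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Descent step on the generator (non-saturating form): maximize log D(G(z))
    loss_G = bce(D(x_fake), torch.ones(x_real.size(0), 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```

The loop alternates one discriminator update with one generator update per iteration, which is the simplest form of the GDA dynamics analyzed in the paper's theoretical setting.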