This paper studies the rates of convergence for learning distributions implicitly with the adversarial framework and Generative Adversarial Networks (GANs), which subsume Wasserstein, Sobolev, MMD GAN, and Generalized/Simulated Method of Moments (GMM/SMM) as special cases. We study a wide range of parametric and nonparametric target distributions under a host of objective evaluation metrics. We investigate how to obtain valid statistical guarantees for GANs through the lens of regularization. On the nonparametric end, we derive the optimal minimax rates for distribution estimation under the adversarial framework. On the parametric end, we establish a theory for general neural network classes (including deep leaky ReLU networks) that characterizes the interplay between the choice of the generator and discriminator pair. We discover and isolate a new notion of regularization, called the generator-discriminator-pair regularization, that sheds light on the advantage of GANs compared to classical parametric and nonparametric approaches for explicit distribution estimation. We develop novel oracle inequalities as the main technical tools for analyzing GANs, which are of independent interest.