This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples. Our main results establish the convergence rates of GANs under a collection of integral probability metrics defined through H\"older classes, including the Wasserstein distance as a special case. We also show that GANs are able to adaptively learn data distributions that have low-dimensional structures or H\"older densities, when the network architectures are chosen properly. In particular, for distributions concentrated around a low-dimensional set, we show that the learning rates of GANs do not depend on the high ambient dimension, but only on the lower intrinsic dimension. Our analysis is based on a new oracle inequality that decomposes the estimation error into the generator and discriminator approximation errors and the statistical error, which may be of independent interest.