A broad class of unsupervised deep learning methods, such as Generative Adversarial Networks (GANs), involves training overparameterized models in which the number of model parameters exceeds a certain threshold. A large body of work in supervised learning has shown the importance of model overparameterization in the convergence of gradient descent (GD) to globally optimal solutions. In contrast, the unsupervised setting, and GANs in particular, involve non-convex concave min-max optimization problems that are often trained using Gradient Descent/Ascent (GDA). The role and benefits of model overparameterization in the convergence of GDA to a global saddle point in non-convex concave problems are far less understood. In this work, we present a comprehensive analysis of the importance of model overparameterization in GANs, both theoretically and empirically. We theoretically show that in an overparameterized GAN model with a $1$-layer neural network generator and a linear discriminator, GDA converges to a global saddle point of the underlying non-convex concave min-max problem. To the best of our knowledge, this is the first result for global convergence of GDA in such settings. Our theory is based on a more general result that holds for a broader class of nonlinear generators and discriminators satisfying certain assumptions (including deeper generators and random-feature discriminators). We also empirically study the role of model overparameterization in GANs through several large-scale experiments on the CIFAR-10 and Celeb-A datasets. Our experiments show that overparameterization improves the quality of generated samples across various model architectures and datasets. Remarkably, we observe that overparameterization leads to faster and more stable convergence of GDA across the board.
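For concreteness, the display below sketches the generic min-max objective and the simultaneous GDA updates referred to above; the symbols $f$, $\theta$, $w$, and the step sizes $\eta_\theta$, $\eta_w$ are illustrative placeholders rather than notation taken from the paper.
$$
\min_{\theta}\max_{w} f(\theta, w), \qquad
\theta_{t+1} = \theta_t - \eta_\theta \nabla_\theta f(\theta_t, w_t), \qquad
w_{t+1} = w_t + \eta_w \nabla_w f(\theta_t, w_t).
$$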