We study overparameterization in generative adversarial networks (GANs) that can interpolate the training data. We show that overparameterization can improve generalization performance and accelerate the training process. We study the generalization error as a function of the latent space dimension and identify two main behaviors, depending on the learning setting. First, we show that overparameterized generative models that learn distributions by minimizing a metric or $f$-divergence do not exhibit double descent in generalization errors; specifically, all the interpolating solutions achieve the same generalization error. Second, we develop a novel pseudo-supervised learning approach for GANs where the training utilizes pairs of fabricated (noise) inputs in conjunction with real output samples. Our pseudo-supervised setting exhibits double descent (and in some cases, triple descent) of generalization errors. We combine pseudo-supervision with overparameterization (i.e., an overly large latent space dimension) to accelerate training while achieving generalization performance better than, or close to, that obtained without pseudo-supervision. While our analysis focuses mostly on linear GANs, we also apply our key insights to improve the generalization of nonlinear, multilayer GANs.
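To make the pseudo-supervised setting concrete, the following is a minimal sketch (not the paper's code, and all names, dimensions, and distributions are illustrative assumptions): each real sample is paired with a fixed fabricated noise input, and a linear generator $G(z) = Wz$ is fit to these pairs by least squares; when the latent dimension exceeds the number of training samples, the minimum-norm solution interpolates the pairs.

```python
import numpy as np

# Hedged sketch of pseudo-supervised training of a linear generator G(z) = W z.
# Assumed setup: synthetic stand-ins for the real data, fixed noise inputs
# paired with them, and a least-squares fit of W (not a distribution-matching loss).

rng = np.random.default_rng(0)

n, d = 50, 30      # number of training samples, data dimension (assumed values)
k = 100            # latent dimension; k > n corresponds to the overparameterized regime

X_train = rng.normal(size=(d, n))   # stand-in "real" output samples (columns)
Z_train = rng.normal(size=(k, n))   # fabricated noise inputs paired with the samples

# Minimum-norm solution of  min_W ||W Z - X||_F^2  via the Moore-Penrose
# pseudo-inverse; for k >= n it typically interpolates the training pairs exactly.
W = X_train @ np.linalg.pinv(Z_train)

train_err = np.linalg.norm(W @ Z_train - X_train) / np.linalg.norm(X_train)
print(f"relative training error: {train_err:.2e}")   # ~0 when interpolating

# At test time, fresh noise is mapped through the learned W; how well the
# resulting samples match the data distribution is the generalization question.
Z_test = rng.normal(size=(k, 1000))
X_gen = W @ Z_test
```

In this sketch the generalization error would be studied as a function of the latent dimension $k$ relative to $n$, which is where the double- and triple-descent behaviors described above arise.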