Double-descent curves in neural networks describe the phenomenon that the generalisation error initially decreases as the number of parameters grows, then increases after passing an optimal number of parameters that is smaller than the number of data points, and finally decreases again in the overparameterized regime. Here we use a neural network Gaussian process (NNGP), which maps exactly to a fully connected network (FCN) in the infinite-width limit, combined with techniques from random matrix theory, to calculate this generalisation behaviour. An advantage of our NNGP approach is that the analytical calculations are easier to interpret. We argue that the generalisation error of neural networks decreases in the overparameterized regime and approaches a finite theoretical value because the networks converge to their limiting Gaussian processes. Our analysis thus provides a mathematical explanation for a surprising phenomenon that could not be explained by conventional statistical learning theory. However, understanding why these finite theoretical values correspond to state-of-the-art generalisation performance in many applications remains an open question, for which we only provide new leads in this paper.
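To make the NNGP setup concrete, the following is a minimal sketch, not the paper's calculation: the infinite-width kernel of a deep ReLU FCN is built by the standard arc-cosine recursion, and the GP posterior mean gives a predictor whose test mean-squared error plays the role of the generalisation error discussed above. The depth, the variance parameters sigma_w2 and sigma_b2, the noise level, and the synthetic linear teacher are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact setup):
# NNGP kernel regression for a deep ReLU fully connected network in the
# infinite-width limit, used to estimate the test (generalisation) error.
import numpy as np

def nngp_kernel(X1, X2, depth=3, sigma_w2=2.0, sigma_b2=0.1):
    """Recursive NNGP kernel of a ReLU FCN (arc-cosine recursion)."""
    d = X1.shape[1]
    # Layer-0 kernel: affine read-in of the inputs.
    K = sigma_b2 + sigma_w2 * (X1 @ X2.T) / d
    K1 = sigma_b2 + sigma_w2 * np.sum(X1 ** 2, axis=1) / d  # diagonal for X1
    K2 = sigma_b2 + sigma_w2 * np.sum(X2 ** 2, axis=1) / d  # diagonal for X2
    for _ in range(depth):
        norm = np.sqrt(np.outer(K1, K2))
        cos_t = np.clip(K / norm, -1.0, 1.0)
        theta = np.arccos(cos_t)
        # ReLU expectation E[phi(u)phi(v)] under the layer's GP prior.
        K = sigma_b2 + sigma_w2 * norm * (
            np.sin(theta) + (np.pi - theta) * cos_t) / (2 * np.pi)
        K1 = sigma_b2 + sigma_w2 * K1 / 2.0
        K2 = sigma_b2 + sigma_w2 * K2 / 2.0
    return K

def gp_test_error(X_train, y_train, X_test, y_test, noise=1e-3, **kw):
    """Mean-squared test error of the NNGP posterior-mean predictor."""
    K_nn = nngp_kernel(X_train, X_train, **kw)
    K_tn = nngp_kernel(X_test, X_train, **kw)
    alpha = np.linalg.solve(K_nn + noise * np.eye(len(X_train)), y_train)
    return np.mean((K_tn @ alpha - y_test) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n_train, n_test = 20, 100, 500
    w_star = rng.standard_normal(d) / np.sqrt(d)   # toy linear teacher
    X_tr = rng.standard_normal((n_train, d))
    X_te = rng.standard_normal((n_test, d))
    y_tr, y_te = X_tr @ w_star, X_te @ w_star
    print("NNGP test MSE:", gp_test_error(X_tr, y_tr, X_te, y_te))
```

Sweeping the number of training points (or the effective kernel complexity) in such a sketch is one way to trace out generalisation curves of the limiting Gaussian process, which is the object the analytical random-matrix calculation characterises.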