Recent theoretical works have characterized the dynamics of wide shallow neural networks trained via gradient descent in an asymptotic mean-field limit as the width tends to infinity. At initialization, the random sampling of the parameters leads to deviations from the mean-field limit dictated by the classical Central Limit Theorem (CLT). However, since gradient descent induces correlations among the parameters, it is of interest to analyze how these fluctuations evolve. Here, we use a dynamical CLT to prove that the asymptotic fluctuations around the mean-field limit remain bounded in mean square throughout training. The upper bound is given by a Monte-Carlo resampling error, with a variance that depends on the 2-norm of the underlying measure, which also controls the generalization error. This motivates using the 2-norm as a regularization term during training. Furthermore, if the mean-field dynamics converges to a measure that interpolates the training data, we prove that the asymptotic deviation eventually vanishes in the CLT scaling. We also complement these results with numerical experiments.
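As a rough illustration of the regularization idea mentioned above, the following minimal sketch trains a finite-width shallow network by full-batch gradient descent with an added penalty. The penalty used here (the mean squared particle norm, with hypothetical weight `lam`) is only an illustrative stand-in for the 2-norm of the underlying measure discussed in the abstract, not the paper's actual regularizer; the data, width, and learning-rate choices are likewise assumptions made for the example.

```python
# Sketch only: shallow network f(x) = (1/n) * sum_i a_i * tanh(w_i . x)
# in mean-field scaling, trained with gradient descent plus a penalty that
# serves as a *hypothetical* surrogate for the 2-norm of the parameter measure.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (odd target, so no biases are needed).
X = rng.uniform(-1.0, 1.0, size=(64, 1))
y = np.sin(np.pi * X[:, 0])

n = 512                 # width (number of "particles")
lr = 0.5 * n            # learning rate scaled by width (mean-field time scale)
lam = 1e-3              # strength of the hypothetical surrogate penalty
a = rng.normal(size=n)                  # outer weights
W = rng.normal(size=(n, X.shape[1]))    # inner weights

def forward(a, W, X):
    # 1/n scaling so that the infinite-width (mean-field) limit is well defined.
    return (1.0 / n) * np.tanh(X @ W.T) @ a

for step in range(2000):
    H = np.tanh(X @ W.T)                # (m, n) hidden activations
    resid = (1.0 / n) * H @ a - y       # (m,) residuals

    # Gradients of 0.5 * mean squared error.
    grad_a = (1.0 / n) * H.T @ resid / len(y)
    grad_W = (1.0 / n) * ((resid[:, None] * (1 - H**2) * a[None, :]).T @ X) / len(y)

    # Hypothetical surrogate penalty: lam/(2n) * sum_i (a_i^2 + |w_i|^2).
    grad_a += lam * a / n
    grad_W += lam * W / n

    a -= lr * grad_a
    W -= lr * grad_W

print("final training MSE:", np.mean((forward(a, W, X) - y) ** 2))
```

Decreasing `lam` recovers plain gradient descent on the squared loss, while increasing it trades training error for a smaller (surrogate) measure norm, mirroring the regularization trade-off suggested by the bound.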