The mean field (MF) theory of multilayer neural networks centers on a particular infinite-width scaling, in which the learning dynamics is closely tracked by the MF limit. A random fluctuation around this infinite-width limit is expected from a large-width expansion to the next order. This fluctuation has been studied only in shallow networks, where previous works rely on heavily technical notions or additional formulation ideas amenable only to that case. Treatment of the multilayer case has been missing, with the chief difficulty being to find a formulation that captures the stochastic dependency across not only time but also depth. In this work, we initiate the study of the fluctuation in the case of multilayer networks, at any network depth. Leveraging the neuronal embedding framework recently introduced by Nguyen and Pham, we systematically derive a system of dynamical equations, called the second-order MF limit, that captures the limiting fluctuation distribution. Through this framework, we demonstrate the complex interactions among neurons in the second-order MF limit, the stochasticity with cross-layer dependency, and the nonlinear time evolution inherent in the limiting fluctuation. A limit theorem is proven that relates this limit quantitatively to the fluctuation of large-width networks. We apply the result to show a stability property of gradient descent MF training: in the large-width regime, along the training trajectory, it progressively biases towards a solution with "minimal fluctuation" (in fact, vanishing fluctuation) in the learned output function, even after the network has been initialized at or has converged (sufficiently fast) to a global optimum. This extends a similar phenomenon, previously shown only for shallow networks with a squared loss in the ERM setting, to multilayer networks in a more general setting with a loss function that is not necessarily convex.
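Schematically, the large-width expansion alluded to above can be sketched as follows; this display is an illustrative sketch under the common central-limit-type scaling assumption, and the symbols $f_n$, $\bar{f}$, $\tilde{f}$ are hypothetical notation rather than the paper's own:
\[
  f_n(t, x) \;=\; \bar{f}(t, x) \;+\; \frac{1}{\sqrt{n}}\, \tilde{f}(t, x) \;+\; o\!\left(n^{-1/2}\right),
\]
where $n$ denotes the network width, $f_n$ the output function of the width-$n$ network at training time $t$, $\bar{f}$ its (first-order) MF limit, and $\tilde{f}$ the random fluctuation whose limiting law is characterized by the second-order MF limit. In this reading, the stability property in the abstract says that gradient descent MF training drives the fluctuation $\tilde{f}$ in the learned output function towards zero along the trajectory.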