We develop a mathematically rigorous framework for multilayer neural networks in the mean field regime. As the network's widths increase, the network's learning trajectory is shown to be well captured by a meaningful and dynamically nonlinear limit (the \textit{mean field} limit), which is characterized by a system of ODEs. Our framework applies to a broad range of network architectures, learning dynamics, and network initializations. Central to the framework is the new idea of a \textit{neuronal embedding}, which comprises a non-evolving probability space that allows one to embed neural networks of arbitrary widths. Using our framework, we prove several properties of large-width multilayer neural networks. First, we show that independent and identically distributed initializations cause strong degeneracy effects on the network's learning trajectory when the network's depth is at least four. Second, we obtain several global convergence guarantees for feedforward multilayer networks under a number of different setups. These include two-layer and three-layer networks with independent and identically distributed initializations, and multilayer networks of arbitrary depths with a special type of correlated initialization that is motivated by the new concept of \textit{bidirectional diversity}. Unlike previous works that rely on convexity, our results admit non-convex losses and hinge on a certain universal approximation property, which is a distinctive feature of infinite-width neural networks and is shown to hold throughout the training process. Aside from being the first known results for global convergence of multilayer networks in the mean field regime, they demonstrate the flexibility of our framework and incorporate several new ideas and insights that depart from conventional convex optimization wisdom.
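For orientation, the mean field limit can be illustrated in the familiar two-layer case (a standard textbook-style example, not a statement of this paper's general multilayer result): a width-$n$ network with i.i.d. parameters $\theta_1, \dots, \theta_n \sim \rho$ and an activation unit $\sigma$ satisfies, by the law of large numbers,
\[
\hat{y}_n(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} \sigma(x;\theta_i) \;\xrightarrow[\;n\to\infty\;]{}\; \hat{y}(x) \;=\; \mathbb{E}_{\theta\sim\rho}\bigl[\sigma(x;\theta)\bigr],
\]
and gradient-based training of the finite network corresponds, in the limit, to an evolution $t \mapsto \rho_t$ of the parameter distribution. The multilayer setting treated in this work is substantially more delicate, which is what the neuronal embedding construction addresses.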