The convergence of stochastic interacting particle systems in the mean-field limit to solutions of conservative stochastic partial differential equations is shown, with optimal rate of convergence. As a second main result, a quantitative central limit theorem for such SPDEs is derived, again with optimal rate of convergence. The results apply in particular to the mean-field scaling limit of stochastic gradient descent dynamics for overparametrized, shallow neural networks, which converges to solutions of SPDEs. It is shown that including fluctuations in the limiting SPDE improves the rate of convergence and retains information about the fluctuations of stochastic gradient descent in the continuum limit.
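To make the object of study concrete, the following is a minimal sketch of the particle system the abstract refers to: stochastic gradient descent on a shallow (one-hidden-layer) network in the mean-field scaling, where the output is an average over N neurons ("particles"). The target function, learning rate, and network width here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mean-field scaling: a shallow network of N neurons, with the output an
# average over particles: f(x) = (1/N) * sum_i a_i * tanh(b_i * x).
N = 200       # number of neurons/particles (overparametrized regime; assumption)
eta = 0.1     # learning rate (assumption)
steps = 5000

def target(x):
    # Illustrative 1-D regression target (assumption).
    return np.sin(x)

a = rng.normal(size=N)   # outer weights (the "particles")
b = rng.normal(size=N)   # inner weights

def predict(x, a, b):
    return np.mean(a * np.tanh(b * x))

for _ in range(steps):
    x = rng.uniform(-2.0, 2.0)             # one stochastic sample per step
    err = predict(x, a, b) - target(x)     # residual of the squared loss
    # Gradients of (1/2) * err**2 under the 1/N output scaling, with the
    # learning rate rescaled by N so each particle moves at O(1) speed --
    # this is the scaling in which the empirical measure of (a_i, b_i)
    # has a mean-field limit.
    t = np.tanh(b * x)
    a -= eta * err * t
    b -= eta * err * a * (1.0 - t ** 2) * x

# After training, the averaged network should roughly fit sin on [-2, 2].
xs = np.linspace(-2.0, 2.0, 50)
mse = np.mean([(predict(x, a, b) - target(x)) ** 2 for x in xs])
```

In this scaling the empirical measure of the particles `(a_i, b_i)` is the object whose mean-field limit (and whose fluctuations around it) the abstract's SPDE results describe.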