The convergence of stochastic interacting particle systems in the mean-field limit to solutions of conservative stochastic partial differential equations is established, with an optimal rate of convergence. As a second main result, a quantitative central limit theorem for such SPDEs is derived, again with an optimal rate of convergence. The results apply, in particular, to the convergence in the mean-field scaling of stochastic gradient descent dynamics in overparametrized shallow neural networks to solutions of SPDEs. It is shown that including fluctuations in the limiting SPDE improves the rate of convergence and retains information about the fluctuations of stochastic gradient descent in the continuum limit.
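The interacting particle system referred to above can be made concrete with a toy simulation. The sketch below is a hypothetical illustration, not the paper's construction: a shallow network whose output is the empirical average over N neurons is trained by one-sample SGD in the mean-field scaling, so the empirical measure of the weights is the particle system whose large-N limit the (S)PDE describes. The teacher setup, activation, and all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000      # number of neurons (particles)
d = 2         # input dimension
eta = 0.1     # learning rate
steps = 2000  # SGD iterations

# Assumed toy target: a fixed "teacher" single neuron.
w_star = rng.standard_normal(d)

def act(z):
    return np.tanh(z)

def d_act(z):
    return 1.0 - np.tanh(z) ** 2

def avg_loss(W, n_eval=200, seed=1):
    # Monte Carlo estimate of the population squared loss.
    r = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_eval):
        x = r.standard_normal(d)
        total += 0.5 * (act(W @ x).mean() - act(w_star @ x)) ** 2
    return total / n_eval

# Particle system: one weight vector per neuron.
W = rng.standard_normal((N, d))
initial_loss = avg_loss(W)

for _ in range(steps):
    x = rng.standard_normal(d)   # fresh sample each step -> SGD noise
    y = act(w_star @ x)          # teacher label
    pre = W @ x                  # (N,) pre-activations
    pred = act(pre).mean()       # mean-field 1/N output scaling
    err = pred - y
    # Gradient of 0.5*err^2 w.r.t. each w_i; the 1/N from the output
    # scaling is absorbed into eta so each particle moves at O(1) speed.
    W -= eta * err * d_act(pre)[:, None] * x[None, :]

final_loss = avg_loss(W)
print(initial_loss, final_loss)
```

As N grows, the empirical measure of the rows of `W` concentrates, and the fluctuations around that limit are the object the quantitative central limit theorem addresses.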