We analyze in a closed form the learning dynamics of stochastic gradient descent (SGD) for a single-layer neural network classifying a high-dimensional Gaussian mixture where each cluster is assigned one of two labels. This problem provides a prototype of a non-convex loss landscape with interpolating regimes and a large generalization gap. We define a particular stochastic process for which SGD can be extended to a continuous-time limit that we call stochastic gradient flow. In the full-batch limit, we recover the standard gradient flow. We apply dynamical mean-field theory from statistical physics to track the dynamics of the algorithm in the high-dimensional limit via a self-consistent stochastic process. We explore the performance of the algorithm as a function of the control parameters, shedding light on how it navigates the loss landscape.
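To make the setting concrete, below is a minimal illustrative sketch of the kind of problem described above: mini-batch SGD for a single-layer (linear) classifier on a high-dimensional two-cluster Gaussian mixture, where each cluster carries one of two labels. The dimensions, logistic loss, learning rate, and batch size are illustrative assumptions, not the specific parametrization or algorithmic definitions analyzed in the paper.

```python
# Illustrative sketch only: mini-batch SGD for a single-layer classifier on a
# two-cluster high-dimensional Gaussian mixture. Loss, scaling, and hyperparameters
# are assumptions made for this example, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)

d, n = 200, 2000                              # input dimension, number of samples
mu = rng.standard_normal(d) / np.sqrt(d)      # cluster mean direction (norm ~ 1)

# Two Gaussian clusters centered at +mu and -mu, each assigned one label (+1 / -1).
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu[None, :] + rng.standard_normal((n, d)) / np.sqrt(d)

def loss_and_grad(w, Xb, yb):
    """Logistic loss and gradient for the linear predictor w (illustrative choice)."""
    margins = yb * (Xb @ w)
    loss = np.mean(np.logaddexp(0.0, -margins))
    grad = -(Xb.T @ (yb / (1.0 + np.exp(margins)))) / len(yb)
    return loss, grad

w = np.zeros(d)
lr, batch_size, steps = 0.5, 50, 2000
for t in range(steps):
    # Random mini-batch sampling is the source of the stochasticity in SGD;
    # batch_size -> n recovers the full-batch (gradient flow) limit discussed above.
    idx = rng.choice(n, size=batch_size, replace=False)
    _, g = loss_and_grad(w, X[idx], y[idx])
    w -= lr * g

train_acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy after SGD: {train_acc:.3f}")
```

In this sketch, shrinking the learning rate while rescaling time corresponds loosely to the continuous-time limit (stochastic gradient flow) mentioned in the abstract; the dynamical mean-field analysis itself is carried out analytically in the paper and is not reproduced here.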