We study the training dynamics of shallow neural networks in a two-timescale regime in which the stepsizes for the inner layer are much smaller than those for the outer layer. In this regime, we prove convergence of the gradient flow to a global optimum of the non-convex optimization problem in a simple univariate setting. The number of neurons need not be asymptotically large for our result to hold, which distinguishes it from popular recent approaches such as the neural tangent kernel or mean-field regimes. We also provide an experimental illustration showing that stochastic gradient descent behaves according to our description of the gradient flow, and thus converges to a global optimum in the two-timescale regime, but can fail outside of it.
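To make the timescale separation concrete, the following is a minimal, hypothetical sketch (not the paper's experimental setup) of two-timescale SGD on a shallow univariate ReLU network: the inner-layer stepsize is smaller than the outer-layer stepsize by a factor eps. All names, constants, and the target function are illustrative assumptions.

```python
import numpy as np

# Illustrative two-timescale SGD sketch for f(x) = sum_j a_j * relu(w_j * x + b_j).
# The inner layer (w, b) moves on a slower timescale than the outer layer (a).
rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

m = 20                      # number of neurons (need not be large)
w = rng.normal(size=m)      # inner-layer weights
b = rng.normal(size=m)      # inner-layer biases
a = np.zeros(m)             # outer-layer weights

target = lambda x: np.sin(3 * x)   # hypothetical univariate target
lr_outer = 1e-2                    # outer-layer stepsize
eps = 1e-2                         # timescale separation factor
lr_inner = eps * lr_outer          # inner-layer stepsize (much smaller)

for step in range(50_000):
    x = rng.uniform(-1.0, 1.0)
    pre = w * x + b
    h = relu(pre)
    err = a @ h - target(x)
    # Single-sample gradients of the squared loss 0.5 * err**2.
    grad_a = err * h
    grad_w = err * a * (pre > 0) * x
    grad_b = err * a * (pre > 0)
    a -= lr_outer * grad_a
    w -= lr_inner * grad_w
    b -= lr_inner * grad_b

# Compare the learned network with the target on a few test points.
xs = np.linspace(-1.0, 1.0, 5)
print(np.c_[xs, [a @ relu(w * x + b) for x in xs], target(xs)])
```

Setting eps close to 1 removes the timescale separation, which is the regime in which, per the abstract, SGD can fail to reach a global optimum.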