Neural networks trained via gradient descent with random initialization and without any regularization enjoy good generalization performance in practice despite being highly overparametrized. A promising direction for explaining this phenomenon is to study how initialization and overparametrization affect the convergence and implicit bias of training algorithms. In this paper, we present a novel analysis of single-hidden-layer linear networks trained under gradient flow, which connects initialization, optimization, and overparametrization. First, we show that the squared loss converges exponentially to its optimum at a rate that depends on the level of imbalance and the margin of the initialization. Second, we show that proper initialization constrains the dynamics of the network parameters to lie within an invariant set; in turn, minimizing the loss over this set leads to the min-norm solution. Finally, we show that a large hidden-layer width, together with (properly scaled) random initialization, ensures proximity to such an invariant set during training, allowing us to derive a novel non-asymptotic upper bound on the distance between the trained network and the min-norm solution.
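The setting above can be illustrated with a minimal numerical sketch (not from the paper): an underdetermined least-squares problem solved by a single-hidden-layer linear network trained with plain gradient descent as a discretization of gradient flow. The width, initialization scale, step size, and variable names below are illustrative assumptions; the sketch only shows the qualitative behavior, with small 1/sqrt(h)-scaled random initialization and large width, the end-to-end linear map the network learns lands close to the min-norm interpolant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined regression: fewer samples (n) than input dims (d),
# so many interpolating solutions exist; the min-norm one is pinv(X) @ y.
n, d, h = 5, 20, 1000  # samples, input dim, hidden width (overparametrized)
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Single-hidden-layer linear network f(x) = w2^T W1 x, with random
# initialization scaled by 1/sqrt(h) (an illustrative choice of scaling).
W1 = rng.standard_normal((h, d)) / np.sqrt(h)
w2 = rng.standard_normal(h) / np.sqrt(h)

# Gradient descent on L = ||X W1^T w2 - y||^2 / (2n); a small step size
# makes the discrete dynamics track gradient flow.
lr = 1e-2
for _ in range(20_000):
    r = X @ (W1.T @ w2) - y           # residuals on the training data
    gW1 = np.outer(w2, r @ X) / n     # dL/dW1
    gw2 = W1 @ (X.T @ r) / n          # dL/dw2
    W1 -= lr * gW1
    w2 -= lr * gw2

beta_net = W1.T @ w2                  # end-to-end linear map of the network
beta_min = np.linalg.pinv(X) @ y      # min-norm interpolating solution

print("train residual:", np.linalg.norm(X @ beta_net - y))   # near zero
print("dist to min-norm:", np.linalg.norm(beta_net - beta_min))
```

Shrinking the initialization scale or increasing the width `h` in this sketch tightens the gap to `beta_min`, mirroring the qualitative role the abstract assigns to width and initialization scale.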