Traditional analyses of gradient descent show that when the largest eigenvalue of the Hessian, also known as the sharpness $S(\theta)$, is bounded by $2/\eta$, training is "stable" and the training loss decreases monotonically. Recent works, however, have observed that this assumption does not hold when training modern neural networks with full-batch or large-batch gradient descent. Most recently, Cohen et al. (2021) observed two important phenomena. The first, dubbed progressive sharpening, is that the sharpness steadily increases throughout training until it reaches the instability cutoff $2/\eta$. The second, dubbed edge of stability, is that the sharpness hovers at $2/\eta$ for the remainder of training while the loss continues decreasing, albeit non-monotonically. We demonstrate that, far from being chaotic, the dynamics of gradient descent at the edge of stability can be captured by a cubic Taylor expansion: as the iterates diverge in the direction of the top eigenvector of the Hessian due to instability, the cubic term in the local Taylor expansion of the loss function causes the curvature to decrease until stability is restored. This property, which we call self-stabilization, is a general property of gradient descent and explains its behavior at the edge of stability. A key consequence of self-stabilization is that gradient descent at the edge of stability implicitly follows projected gradient descent (PGD) under the constraint $S(\theta) \le 2/\eta$. Our analysis provides precise predictions for the loss, sharpness, and deviation from the PGD trajectory throughout training, which we verify both empirically in a number of standard settings and theoretically under mild conditions. Our analysis uncovers the mechanism for gradient descent's implicit bias towards stability.
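The classical stability criterion referenced above can be seen directly on a one-dimensional quadratic. The sketch below (our own illustration, not from the paper) runs gradient descent on $L(\theta) = s\theta^2/2$, whose only Hessian eigenvalue is $s$, and contrasts a sharpness below the cutoff $2/\eta$ with one above it:

```python
# Minimal illustration of the classical stability criterion: on the
# quadratic loss L(theta) = s * theta**2 / 2, the gradient descent
# update theta <- theta - eta * s * theta multiplies theta by
# (1 - eta * s) each step, which contracts iff the sharpness s
# satisfies s < 2 / eta.

def run_gd(sharpness, eta=0.1, theta0=1.0, steps=50):
    """Run gradient descent on L(theta) = sharpness * theta**2 / 2."""
    theta = theta0
    for _ in range(steps):
        theta -= eta * sharpness * theta  # gradient is sharpness * theta
    return theta

# Stability threshold is 2 / eta = 20.
stable = abs(run_gd(sharpness=15.0))    # |1 - eta*s| = 0.5: iterates shrink
unstable = abs(run_gd(sharpness=25.0))  # |1 - eta*s| = 1.5: iterates oscillate and grow
print(f"|theta| after 50 steps: stable={stable:.2e}, unstable={unstable:.2e}")
```

Note that in both regimes the per-step factor $1 - \eta s$ is negative, so the iterates oscillate in sign; what distinguishes stability from instability is whether its magnitude is below or above 1.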
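The self-stabilization mechanism itself can be reproduced in a two-parameter toy model. In the sketch below (our own construction, not a model from the paper), the loss $L(x, y) = \tfrac{1}{2}(a_0 - y)x^2$ has sharpness $S(y) = a_0 - y$ in the $x$ direction, and the coupling $-\tfrac{1}{2} y x^2$ plays the role of the cubic term: as $|x|$ grows due to instability, the gradient in $y$ pushes the sharpness down until it falls below $2/\eta$ and stability is restored.

```python
# Toy model of self-stabilization. Loss: L(x, y) = 0.5 * (a0 - y) * x**2.
# The sharpness in the x direction is S(y) = a0 - y, and we start it
# above the stability threshold 2/eta, so |x| initially grows. The cubic
# coupling term makes dL/dy = -0.5 * x**2, so gradient descent increases
# y (hence decreases the sharpness) faster the larger |x| becomes,
# until S(y) drops below 2/eta and x contracts back toward zero.

eta = 0.01   # step size; stability threshold is 2 / eta = 200
a0 = 220.0   # initial sharpness, deliberately above the threshold
x, y = 0.01, 0.0

for _ in range(300):
    grad_x = (a0 - y) * x      # dL/dx
    grad_y = -0.5 * x ** 2     # dL/dy, the effect of the cubic term
    x -= eta * grad_x          # unstable while a0 - y > 2/eta
    y -= eta * grad_y          # y rises, so the sharpness a0 - y falls

sharpness = a0 - y
print(f"final sharpness = {sharpness:.1f} (threshold 2/eta = {2 / eta:.0f})")
print(f"final |x| = {abs(x):.2e}")
```

Because this toy lacks a progressive-sharpening force pushing the curvature back up, the sharpness settles strictly below $2/\eta$ rather than hovering at it; the full paper's analysis treats the interplay of both effects.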