Deep learning experiments by Cohen et al. [2021] using deterministic Gradient Descent (GD) revealed an Edge of Stability (EoS) phase in which the learning rate (LR) and sharpness (i.e., the largest eigenvalue of the Hessian) no longer behave as in traditional optimization. Sharpness stabilizes around $2/\text{LR}$, and the loss goes up and down across iterations, yet still with an overall downward trend. The current paper mathematically analyzes a new mechanism of implicit regularization in the EoS phase, whereby GD updates, due to the non-smooth loss landscape, turn out to evolve along some deterministic flow on the manifold of minimum loss. This is in contrast to many previous results on implicit bias, which rely either on infinitesimal updates or on noise in the gradient. Formally, for any smooth function $L$ satisfying a certain regularity condition, this effect is demonstrated for (1) Normalized GD, i.e., GD with a varying LR $\eta_t = \frac{\eta}{\| \nabla L(x(t)) \|}$ and loss $L$; (2) GD with constant LR and loss $\sqrt{L - \min_x L(x)}$. Both provably enter the Edge of Stability, with the associated flow on the manifold minimizing $\lambda_{1}(\nabla^2 L)$. The above theoretical results have been corroborated by an experimental study.
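The following is a minimal illustrative sketch (not taken from the paper) of the Normalized GD dynamics described above, on a hypothetical toy loss $L(u,v) = \tfrac{1}{2}(uv-1)^2$ whose minimizers form the manifold $uv = 1$; on that manifold the sharpness is $\lambda_1(\nabla^2 L) = u^2 + v^2$, minimized at $|u| = |v| = 1$, so the predicted flow should reduce sharpness toward $2$. The step size `eta` and starting point are arbitrary choices for illustration.

```python
# Sketch: Normalized GD on the toy loss L(u, v) = 0.5 * (u*v - 1)^2.
# Minimizers satisfy u*v = 1; sharpness there is u^2 + v^2, smallest at (1, 1).
import numpy as np

eta = 0.1  # fixed step length for Normalized GD (hypothetical choice)

def loss(w):
    u, v = w
    return 0.5 * (u * v - 1.0) ** 2

def grad(w):
    u, v = w
    r = u * v - 1.0
    return np.array([r * v, r * u])

def hessian(w):
    u, v = w
    return np.array([[v * v, 2 * u * v - 1.0],
                     [2 * u * v - 1.0, u * u]])

w = np.array([4.0, 0.3])  # start near the manifold, far from the balanced point (1, 1)
for t in range(20000):
    g = grad(w)
    gnorm = np.linalg.norm(g)
    if gnorm < 1e-12:
        break  # exactly at a minimizer; Normalized GD step is undefined
    w = w - (eta / gnorm) * g  # Normalized GD: eta_t = eta / ||grad L(x(t))||
    if t % 4000 == 0:
        sharpness = np.linalg.eigvalsh(hessian(w))[-1]  # largest Hessian eigenvalue
        print(f"t={t:5d}  loss={loss(w):.2e}  sharpness={sharpness:.3f}")
# Expected trend under these assumptions: the iterate oscillates around the
# manifold u*v = 1 while drifting along it, so the printed sharpness decreases
# toward 2, mirroring the sharpness-minimizing flow described in the abstract.
```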