Understanding the geometry of the loss landscape near a minimum is key to explaining the implicit bias of gradient-based methods in non-convex optimization problems such as deep neural network training and deep matrix factorization. A central quantity characterizing this geometry is the maximum eigenvalue of the Hessian of the loss, which measures the sharpness of the landscape. To date, its precise role has remained unclear because no exact expressions for this sharpness measure were known in general settings. In this paper, we present the first exact expression for the maximum eigenvalue of the Hessian of the squared-error loss at any minimizer of general overparameterized deep matrix factorization (i.e., deep linear neural network training) problems, resolving an open question posed by Mulayoff & Michaeli (2020). This expression uncovers a fundamental property of the loss landscape of depth-2 matrix factorization problems: a minimum is flat if and only if it is spectral-norm balanced, which implies that flat minima are not necessarily Frobenius-norm balanced. Furthermore, to complement our theory, we empirically investigate an escape phenomenon observed during gradient-based training near a minimum; this analysis relies crucially on our exact expression for the sharpness.
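Since the abstract centers on the sharpness, i.e. the largest Hessian eigenvalue of the squared-error loss at a minimizer of a depth-2 matrix factorization problem, a small numerical sketch may help fix ideas. The following is purely illustrative and not the paper's method: it estimates the sharpness by finite differences at two global minimizers of a tiny problem that realize the same product, one spectral-norm balanced and one not. The dimensions, the data, and the imbalance factor `c` are our own arbitrary choices.

```python
# Illustrative sketch (not the paper's method): estimate the sharpness
# lambda_max(Hessian of L) at minimizers of a depth-2 factorization loss
# L(W1, W2) = ||W2 W1 X - Y||_F^2 / (2 n), comparing a spectral-norm
# balanced split of the target with an unbalanced rescaling of it.
import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 50
X = rng.standard_normal((d, n))
W_star = rng.standard_normal((d, d))   # target end-to-end linear map
Y = W_star @ X                         # labels realizable by that map

def loss(theta):
    """Squared-error loss with (W1, W2) flattened into one vector."""
    W1 = theta[:d * d].reshape(d, d)
    W2 = theta[d * d:].reshape(d, d)
    R = W2 @ W1 @ X - Y
    return 0.5 * np.sum(R ** 2) / n

def max_hessian_eig(theta, eps=1e-4):
    """Largest Hessian eigenvalue via central finite differences (tiny problems only)."""
    p = theta.size
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            e_i = np.zeros(p); e_i[i] = eps
            e_j = np.zeros(p); e_j[j] = eps
            H[i, j] = (loss(theta + e_i + e_j) - loss(theta + e_i - e_j)
                       - loss(theta - e_i + e_j) + loss(theta - e_i - e_j)) / (4 * eps ** 2)
    return np.max(np.linalg.eigvalsh(0.5 * (H + H.T)))

# Two global minimizers with the same product W2 W1 = W_star:
# a balanced split via the SVD and an unbalanced rescaling of it.
U, s, Vt = np.linalg.svd(W_star)
S_half = np.diag(np.sqrt(s))
W2_bal, W1_bal = U @ S_half, S_half @ Vt   # ||W1||_2 == ||W2||_2 (spectral-norm balanced)
c = 5.0                                    # illustrative imbalance factor
W2_unb, W1_unb = W2_bal * c, W1_bal / c    # same product, unbalanced factors

theta_bal = np.concatenate([W1_bal.ravel(), W2_bal.ravel()])
theta_unb = np.concatenate([W1_unb.ravel(), W2_unb.ravel()])
print("loss (balanced, unbalanced):", loss(theta_bal), loss(theta_unb))   # both ~0
print("sharpness at balanced minimizer  :", max_hessian_eig(theta_bal))
print("sharpness at unbalanced minimizer:", max_hessian_eig(theta_unb))
```

In this toy example both points attain zero loss, yet the estimated sharpness differs depending on how the product is split between the factors, which is the kind of distinction the exact expression in the paper makes precise.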