We provide an explicit analysis of the dynamics of vanilla gradient descent for deep matrix factorization in a setting where the minimizer of the loss function is unique. We show that the rate at which each ground-truth eigenvector is recovered is proportional to the magnitude of the corresponding eigenvalue, and that the differences among these rates are amplified as the depth of the factorization increases. For exactly characterized time intervals, the effective rank of the gradient descent iterates is provably close to the effective rank of a low-rank projection of the ground-truth matrix, so that early stopping of gradient descent produces regularized solutions that can be used, for instance, for denoising. In particular, apart from a few initial iterations, the effective rank of the iterates is monotonically increasing, suggesting that "matrix factorization implicitly forces gradient descent to take a route on which the effective rank is monotone". Since empirical observations in more general scenarios such as matrix sensing exhibit a similar phenomenon, we believe that our theoretical results shed some light on the still mysterious "implicit bias" of gradient descent in deep learning.
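The following is a minimal numerical sketch, not the paper's experimental setup, of the setting described above: gradient descent on a depth-3 factorization fitted to a noisy low-rank ground truth, while tracking an effective-rank proxy of the product of the factors. All parameter choices (dimension, depth, step size, initialization scale) and the entropy-based effective-rank proxy are illustrative assumptions; the paper's exact definitions and parameters may differ.

```python
import numpy as np

# Illustrative sketch only: gradient descent on a depth-3 factorization
# W_3 W_2 W_1 fitted to a noisy low-rank ground truth, tracking an
# effective-rank proxy of the product along the iterations.

rng = np.random.default_rng(0)
n, depth, rank = 30, 3, 3          # illustrative sizes, not from the paper
lr, steps = 0.05, 3000

# Ground truth: rank-3 PSD matrix plus small noise.
U = rng.standard_normal((n, rank)) / np.sqrt(n)
M = U @ U.T + 0.01 * rng.standard_normal((n, n))

# Small (near-zero) initialization, as is common in implicit-bias analyses.
Ws = [0.05 * rng.standard_normal((n, n)) for _ in range(depth)]

def product(mats):
    """Return mats[-1] @ ... @ mats[0]; identity for the empty list."""
    P = np.eye(n)
    for W in mats:
        P = W @ P
    return P

def effective_rank(A):
    """Entropy-based effective rank (Roy & Vetterli); the paper's exact
    definition may differ, but any such proxy shows the same trend."""
    s = np.linalg.svd(A, compute_uv=False)
    p = s / s.sum()
    p = p[p > 1e-12]
    return float(np.exp(-(p * np.log(p)).sum()))

for t in range(steps + 1):
    P = product(Ws)
    R = P - M  # residual of the loss 0.5 * ||W_depth ... W_1 - M||_F^2
    if t % 300 == 0:
        print(f"step {t:5d}  loss {0.5 * np.linalg.norm(R)**2:.4f}  "
              f"eff. rank {effective_rank(P):.2f}")
    # Gradient w.r.t. W_i is (W_depth ... W_{i+1})^T R (W_{i-1} ... W_1)^T.
    grads = [product(Ws[i + 1:]).T @ R @ product(Ws[:i]).T for i in range(depth)]
    Ws = [W - lr * g for W, g in zip(Ws, grads)]
```

Under these (assumed) parameters, the printed effective rank of the product typically settles near the rank of the noiseless ground truth well before the noise is fitted, illustrating how early stopping can act as a low-rank regularizer.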