Attempts to study the implicit regularization induced by gradient descent (GD) have identified matrix completion as a suitable test bed. Recent findings suggest that this phenomenon cannot be phrased as a norm-minimization problem, implying that a paradigm shift is required and that the dynamics of training must be taken into account. In the present work we address the more general setting of tensor completion by leveraging two popular tensor factorizations, namely Tucker and Tensor Train (TT). We track relevant quantities such as the tensor nuclear norm, the effective rank, and generalized singular values, and we introduce deep Tucker and TT unconstrained factorizations to handle the completion task. Experiments on both synthetic and real data show that gradient descent promotes low-rank solutions, and they support the conjecture that the phenomenon has to be addressed from a dynamical perspective.
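As a minimal illustration of the setup described above, the following sketch runs plain gradient descent on an unconstrained Tensor Train (TT) factorization for a toy completion problem. All specifics here are assumptions for illustration: the tensor sizes, the true TT rank, the overparameterized rank, the observation ratio, and the learning rate are hypothetical choices, not the paper's experimental protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r_true, R = 6, 2, 4  # mode sizes, true TT rank, overparameterized rank (toy values)

# Ground-truth order-3 tensor of TT rank 2 (hypothetical synthetic data).
T = np.einsum('ia,ajb,bk->ijk',
              rng.standard_normal((n, r_true)),
              rng.standard_normal((r_true, n, r_true)),
              rng.standard_normal((r_true, n)))

mask = rng.random(T.shape) < 0.4  # observe ~40% of the entries (completion task)

# Overparameterized TT cores with small initialization; plain GD on the
# observed entries only -- no explicit rank constraint or norm penalty.
A = 0.3 * rng.standard_normal((n, R))
B = 0.3 * rng.standard_normal((R, n, R))
C = 0.3 * rng.standard_normal((R, n))

def obs_err(A, B, C):
    """Relative reconstruction error on the observed entries."""
    X = np.einsum('ia,ajb,bk->ijk', A, B, C)
    return np.linalg.norm(mask * (X - T)) / np.linalg.norm(mask * T)

err_start = obs_err(A, B, C)
lr = 0.01
for _ in range(2000):
    X = np.einsum('ia,ajb,bk->ijk', A, B, C)   # contract TT cores to full tensor
    E = mask * (X - T)                          # residual on observed entries only
    gA = np.einsum('ijk,ajb,bk->ia', E, B, C)   # exact gradients via the chain rule
    gB = np.einsum('ijk,ia,bk->ajb', E, A, C)
    gC = np.einsum('ijk,ia,ajb->bk', E, A, B)
    A, B, C = A - lr * gA, B - lr * gB, C - lr * gC

err_end = obs_err(A, B, C)
print(f"observed relative error: {err_start:.3f} -> {err_end:.3f}")
```

Singular-value-based diagnostics such as the effective rank of the learned cores can then be tracked along the trajectory, which is the kind of quantity the work above monitors; this sketch only shows the optimization loop itself.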