Tensor completion is a natural higher-order generalization of matrix completion where the goal is to recover a low-rank tensor from sparse observations of its entries. Existing algorithms are either heuristic without provable guarantees, based on solving large semidefinite programs that are impractical to run, or make strong assumptions such as requiring the factors to be nearly orthogonal. In this paper we introduce a new variant of alternating minimization, which in turn is inspired by understanding how the progress measures that guide convergence of alternating minimization in the matrix setting need to be adapted to the tensor setting. We show strong provable guarantees, including showing that our algorithm converges linearly to the true tensor even when the factors are highly correlated, and can be implemented in nearly linear time. Moreover our algorithm is also highly practical, and we show that we can complete third-order tensors with a thousand dimensions from observing a tiny fraction of their entries. In contrast, and somewhat surprisingly, we show that the standard version of alternating minimization, without our new twist, can converge at a drastically slower rate in practice.
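To make the baseline concrete, here is a minimal sketch of *standard* alternating minimization (alternating least squares) for third-order tensor completion — the plain variant the abstract contrasts against, not the paper's new algorithm. All names (`als_step`, the tensor size, the sampling rate) are illustrative assumptions; each step fixes two factor matrices and solves a least-squares problem for the third over the observed entries.

```python
import numpy as np

def tensor_from_factors(A, B, C):
    """Rank-r tensor T[i,j,k] = sum_l A[i,l] * B[j,l] * C[k,l]."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def als_step(T, mask, A, B, C):
    """One alternating least-squares update of the mode-0 factor A,
    holding B and C fixed, using only the observed entries in `mask`."""
    n, r = A.shape
    A_new = A.copy()
    for i in range(n):
        js, ks = np.nonzero(mask[i])       # observed entries in slice i
        if len(js) < r:
            continue                        # too few observations; keep old row
        X = B[js] * C[ks]                   # rows of the Khatri-Rao design matrix
        y = T[i, js, ks]
        A_new[i], *_ = np.linalg.lstsq(X, y, rcond=None)
    return A_new

rng = np.random.default_rng(0)
n, r = 30, 2                                # small toy instance (assumed sizes)
A0, B0, C0 = (rng.standard_normal((n, r)) for _ in range(3))
T = tensor_from_factors(A0, B0, C0)         # ground-truth low-rank tensor
mask = rng.random((n, n, n)) < 0.3          # observe ~30% of the entries

# Random initialization, then cycle the update over the three modes.
A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
init_err = (np.linalg.norm(tensor_from_factors(A, B, C) - T)
            / np.linalg.norm(T))
for _ in range(30):
    A = als_step(T, mask, A, B, C)
    B = als_step(T.transpose(1, 0, 2), mask.transpose(1, 0, 2), B, A, C)
    C = als_step(T.transpose(2, 0, 1), mask.transpose(2, 0, 1), C, A, B)

err = np.linalg.norm(tensor_from_factors(A, B, C) - T) / np.linalg.norm(T)
print(f"relative error: {init_err:.3f} -> {err:.3e}")
```

On easy random instances like this toy one, plain ALS typically drives the error down; the paper's point is that its convergence rate can degrade badly, e.g. when the true factors are highly correlated, which is what the new progress-measure-guided variant addresses.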