The tensor train (TT) format enjoys appealing advantages in handling structured high-order tensors. The past decade has witnessed wide applications of TT-format tensors across diverse disciplines, among which tensor completion has drawn considerable attention. Numerous fast algorithms, including the Riemannian gradient descent (RGrad) algorithm, have been proposed for TT-format tensor completion. However, theoretical guarantees for these algorithms are largely missing or sub-optimal, partly due to the complicated and recursive algebraic operations in TT-format decomposition. Moreover, existing results established for tensors of other formats, such as Tucker and CP, are inapplicable because the algorithms treating TT-format tensors are substantially different and more involved. In this paper, we provide, to the best of our knowledge, the first theoretical guarantee of the convergence of the RGrad algorithm for TT-format tensor completion, under a nearly optimal sample size condition. The RGrad algorithm converges linearly with a constant contraction rate that is free of the tensor condition number, without the need for re-conditioning. We also propose a novel approach, referred to as the sequential second-order moment method, to attain a warm initialization under a similar sample size requirement. As a byproduct, our result significantly refines the prior investigation of the RGrad algorithm for matrix completion. Numerical experiments confirm our theoretical findings and showcase the computational speedup gained by the TT-format decomposition.
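For readers unfamiliar with the TT format itself, the following is a minimal NumPy sketch of the classical TT-SVD decomposition (sequential truncated SVDs of unfoldings), which underlies the TT representation discussed above. This is an illustrative sketch only, not the paper's RGrad completion algorithm; the function names `tt_svd` and `tt_to_full` and the single `max_rank` truncation parameter are our own simplifying assumptions.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way tensor into TT cores via sequential truncated SVDs.

    Returns a list of d cores; core k has shape (r_{k-1}, n_k, r_k),
    with boundary ranks r_0 = r_d = 1 and all internal ranks <= max_rank.
    (Illustrative sketch; a single uniform rank cap is assumed.)
    """
    dims = tensor.shape
    d = len(dims)
    cores = []
    r_prev = 1
    C = np.asarray(tensor, dtype=float)
    for k in range(d - 1):
        # Unfold: rows index (previous rank, current mode), columns the rest.
        C = C.reshape(r_prev * dims[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, S.size)
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # Carry the truncated remainder forward to the next mode.
        C = S[:r, None] * Vt[:r]
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full tensor."""
    full = cores[0]
    for core in cores[1:]:
        # Contract the trailing rank index with the next core's leading one.
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.reshape([c.shape[1] for c in cores])
```

A tensor of exact TT rank r is reproduced exactly (up to floating-point error) when `max_rank >= r`; for larger tensors the truncation gives a quasi-optimal low-TT-rank approximation, which is the structure the completion algorithm exploits.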