Low-tubal-rank tensor estimation is a fundamental task with wide applications in high-dimensional signal processing, machine learning, and imaging science. Traditional approaches tackle this problem by performing the tensor singular value decomposition, which is computationally expensive and becomes infeasible for large-scale tensors. Recent approaches address this issue by factorizing the tensor into two smaller factor tensors and solving the resulting problem via gradient descent. However, such approaches require an accurate estimate of the tensor rank; when the rank is overestimated, gradient descent and its variants converge significantly more slowly or may even diverge. To address this problem, we propose an Alternating Preconditioned Gradient Descent (APGD) algorithm, which accelerates convergence in the over-parameterized setting by adding a preconditioning term to the original gradient and updating the two factors alternately. Under certain geometric assumptions on the objective function, we establish linear convergence guarantees for general low-tubal-rank tensor estimation problems. We then specialize the analysis to low-tubal-rank tensor factorization and low-tubal-rank tensor recovery. Our theoretical results show that APGD achieves linear convergence even under over-parameterization, with a convergence rate independent of the tensor condition number. Extensive simulations on synthetic data validate our theoretical assertions.
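To make the alternating preconditioned updates concrete, the following is a minimal NumPy sketch for the plain factorization loss f(L, R) = (1/2)||L * R^T - X||_F^2, where * denotes the t-product computed slice-wise in the Fourier domain along the tubal dimension. The step size eta, damping parameter lam, random initialization, and all function names are illustrative assumptions for this sketch, not details prescribed by the paper.

```python
import numpy as np

def t_prod(A, B):
    """t-product of A (n1 x p x n3) and B (p x n2 x n3): slice-wise matrix
    products in the Fourier domain along the third (tubal) dimension."""
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    Cf = np.empty((A.shape[0], B.shape[1], A.shape[2]), dtype=complex)
    for k in range(A.shape[2]):
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]
    return np.real(np.fft.ifft(Cf, axis=2))

def t_transpose(A):
    """Tensor transpose under the t-product: transpose each frontal slice,
    then reverse the order of slices 2 through n3."""
    At = np.transpose(A, (1, 0, 2))
    return np.concatenate([At[:, :, :1], At[:, :, :0:-1]], axis=2)

def t_inv(A):
    """Inverse of an r x r x n3 tensor under the t-product (slice-wise
    matrix inverses in the Fourier domain)."""
    Af = np.fft.fft(A, axis=2)
    Cf = np.empty_like(Af)
    for k in range(A.shape[2]):
        Cf[:, :, k] = np.linalg.inv(Af[:, :, k])
    return np.real(np.fft.ifft(Cf, axis=2))

def apgd_factorization(X, r, eta=0.5, lam=1e-6, iters=200, seed=0):
    """Illustrative APGD sketch for f(L, R) = 0.5 * ||L * R^T - X||_F^2,
    where the working rank r may overestimate the true tubal rank."""
    n1, n2, n3 = X.shape
    rng = np.random.default_rng(seed)
    L = 0.1 * rng.standard_normal((n1, r, n3))
    R = 0.1 * rng.standard_normal((n2, r, n3))
    I = np.zeros((r, r, n3))
    I[:, :, 0] = np.eye(r)  # identity tensor under the t-product
    for _ in range(iters):
        # Preconditioned update of L with R held fixed
        residual = t_prod(L, t_transpose(R)) - X
        P = t_inv(t_prod(t_transpose(R), R) + lam * I)
        L = L - eta * t_prod(t_prod(residual, R), P)
        # Alternating step: update R using the freshly updated L
        residual = t_prod(L, t_transpose(R)) - X
        P = t_inv(t_prod(t_transpose(L), L) + lam * I)
        R = R - eta * t_prod(t_prod(t_transpose(residual), L), P)
    return L, R
```

In this sketch the gradients with respect to L and R are right-multiplied (via the t-product) by the inverses of R^T * R + lam * I and L^T * L + lam * I, respectively, and the two factors are updated in turn rather than simultaneously; the small damping term lam keeps the preconditioners invertible when the working rank r exceeds the true tubal rank.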