We study tensor-on-tensor regression, where the goal is to connect tensor responses to tensor covariates through a low Tucker rank parameter tensor/matrix without prior knowledge of its intrinsic rank. We propose the Riemannian gradient descent (RGD) and Riemannian Gauss-Newton (RGN) methods and cope with the challenge of unknown rank by studying the effect of rank over-parameterization. We provide the first convergence guarantees for general tensor-on-tensor regression by showing that RGD and RGN converge, respectively, linearly and quadratically to a statistically optimal estimate in both the rank correctly-parameterized and over-parameterized settings. Our theory reveals an intriguing phenomenon: Riemannian optimization methods naturally adapt to over-parameterization without modifications to their implementation. We also give the first rigorous evidence for the statistical-computational gap in scalar-on-tensor regression under the low-degree polynomials framework. Our theory demonstrates a ``blessing of statistical-computational gap'' phenomenon: in a wide range of scenarios of tensor-on-tensor regression with tensors of order three or higher, the sample size required by computationally feasible estimators matches what is needed under moderate rank over-parameterization, while no such benefit arises in the matrix settings. This shows that moderate rank over-parameterization is essentially ``cost-free'' in terms of sample size for tensor-on-tensor regression of order three or higher. Finally, we conduct simulation studies to show the advantages of our proposed methods and to corroborate our theoretical findings.
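To make the RGD recipe concrete, below is a minimal Python/NumPy sketch for the scalar-on-tensor special case (responses $y_i = \langle \mathcal{A}_i, \mathcal{T}^* \rangle + \epsilon_i$ with $\mathcal{T}^*$ of low Tucker rank). All function names are ours for illustration, and the update is a simplified variant: a plain gradient step followed by a truncated-HOSVD retraction onto the Tucker rank-$\mathbf{r}$ set, whereas the paper's RGD additionally projects the gradient onto the tangent space. It is a sketch under these assumptions, not the paper's exact implementation.

```python
import numpy as np

def mode_prod(T, M, mode):
    """Mode-`mode` product T x_mode M: multiply tensor T by matrix M along one mode."""
    out = np.tensordot(M, T, axes=(1, mode))  # contracted mode becomes axis 0
    return np.moveaxis(out, 0, mode)

def hosvd_retract(T, ranks):
    """Retract T onto tensors of Tucker rank <= `ranks` via truncated HOSVD."""
    factors = []
    for mode, r in enumerate(ranks):
        unf = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)  # mode unfolding
        U, _, _ = np.linalg.svd(unf, full_matrices=False)
        factors.append(U[:, :r])                  # top-r left singular vectors
    core = T
    for mode, U in enumerate(factors):
        core = mode_prod(core, U.T, mode)         # core = T x_1 U1' x_2 U2' ...
    out = core
    for mode, U in enumerate(factors):
        out = mode_prod(out, U, mode)             # rebuild the low Tucker rank tensor
    return out

def rgd_scalar_on_tensor(A, y, ranks, eta=1.0, iters=100):
    """Gradient step + HOSVD retraction for y_i = <A_i, T*> + noise.
    A: (n, p1, ..., pd) stacked design tensors; y: (n,) responses."""
    n = A.shape[0]
    # Spectral-type initialization: retract the back-projection of y.
    T = hosvd_retract(np.tensordot(y, A, axes=(0, 0)) / n, ranks)
    for _ in range(iters):
        resid = np.tensordot(A, T, axes=A.ndim - 1) - y  # <A_i, T> - y_i
        grad = np.tensordot(resid, A, axes=(0, 0)) / n   # least-squares gradient
        T = hosvd_retract(T - eta * grad, ranks)
    return T
```

A toy run (continuing from the sketch above) illustrates the rank over-parameterized setting studied in the paper: the true Tucker rank is $(2,2,2)$, but the method is run with the over-specified rank $(3,3,3)$ and requires no modification.

```python
rng = np.random.default_rng(0)
p, r, n = 10, 2, 2000
G = rng.standard_normal((r, r, r))                              # Tucker core
Us = [np.linalg.qr(rng.standard_normal((p, r)))[0] for _ in range(3)]
T_star = G
for m, U in enumerate(Us):
    T_star = mode_prod(T_star, U, m)                            # true parameter tensor
A = rng.standard_normal((n, p, p, p))                           # Gaussian designs
y = np.tensordot(A, T_star, axes=3) + 0.01 * rng.standard_normal(n)
T_hat = rgd_scalar_on_tensor(A, y, ranks=(3, 3, 3), iters=50)   # over-parameterized fit
print(np.linalg.norm(T_hat - T_star) / np.linalg.norm(T_star))  # relative error
```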