Tensor optimization is crucial in large-scale machine learning and signal processing tasks. In this paper, we consider tensor optimization with a convex and well-conditioned objective function and reformulate it as a nonconvex optimization problem via a Burer-Monteiro-type parameterization. We analyze the local convergence of vanilla gradient descent applied to the factored formulation and establish a local regularity condition under mild assumptions. We also provide a linear convergence analysis for gradient descent initialized in a neighborhood of the true tensor factors. Complementary to the local analysis, this work also characterizes the global geometry of the best rank-one tensor approximation problem and demonstrates that, for orthogonally decomposable tensors, the problem has no spurious local minima and all saddle points are strict, except for the one at zero, which is a third-order saddle point.
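As an illustrative sketch (not taken from the paper), gradient descent on the factored best rank-one approximation problem can be written in a few lines of NumPy. Here the symmetric orthogonally decomposable tensor `T`, the step size, and the initialization near the dominant factor are hypothetical choices for demonstration; the objective is f(x) = 0.5 * ||T - x⊗x⊗x||_F^2, whose gradient for symmetric T is 3||x||^4 x - 3 T(·, x, x):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Hypothetical symmetric, orthogonally decomposable tensor:
# T = 2 * e1⊗e1⊗e1 + 1 * e2⊗e2⊗e2
T = np.zeros((d, d, d))
T[0, 0, 0] = 2.0
T[1, 1, 1] = 1.0

def grad(x):
    """Gradient of f(x) = 0.5 * ||T - x⊗x⊗x||_F^2 for symmetric T."""
    Txx = np.einsum('ijk,j,k->i', T, x, x)          # contraction T(., x, x)
    return 3.0 * np.dot(x, x) ** 2 * x - 3.0 * Txx

# Initialize in a neighborhood of the true factor 2**(1/3) * e1,
# matching the local-convergence setting described in the abstract.
x = np.zeros(d)
x[0] = 2.0 ** (1.0 / 3.0)
x += 0.1 * rng.normal(size=d)

lr = 0.05                                            # step size (hypothetical)
for _ in range(500):
    x = x - lr * grad(x)
```

With this initialization the iterates contract linearly toward the factor 2^{1/3} e1, and the components orthogonal to it decay geometrically, consistent with the local linear convergence the abstract describes.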