Nonconvex regularization is widely used in low-rank matrix learning. However, extending it to low-rank tensor learning remains computationally expensive. To address this problem, we develop an efficient solver for a nonconvex extension of the overlapped nuclear norm regularizer. Based on the proximal average algorithm, the proposed solver avoids expensive tensor folding/unfolding operations. A special "sparse plus low-rank" structure is maintained throughout the iterations, which allows fast computation of the individual proximal steps. Empirical convergence is further improved with the use of adaptive momentum. We provide convergence guarantees to critical points on smooth losses, and also on objectives satisfying the Kurdyka-{\L}ojasiewicz condition. Although the optimization problem is nonconvex and nonsmooth, we show that its critical points still have good statistical performance on the tensor completion problem. Experiments on various synthetic and real-world data sets show that the proposed algorithm is efficient in both time and space, and is more accurate than the existing state of the art.
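The proximal average idea underlying the method can be illustrated as follows: instead of computing the (intractable) proximal step of the sum of per-mode nuclear norms, one averages the cheap proximal maps of the individual modes. This is a minimal NumPy sketch under our own naming; it uses the convex nuclear norm via singular-value soft-thresholding and materializes dense unfoldings, so it does not reflect the nonconvex penalties or the "sparse plus low-rank" bookkeeping that make the proposed solver efficient:

```python
import numpy as np

def unfold(T, mode):
    """Mode-m unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold: reshape and move the leading axis back to `mode`."""
    lead = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(lead), 0, mode)

def prox_nuclear(M, lam):
    """Proximal map of lam * ||.||_* : soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

def prox_average_step(T, lam):
    """One proximal-average step for the overlapped nuclear norm:
    average the per-mode proximal maps instead of solving the joint prox."""
    K = T.ndim
    return sum(fold(prox_nuclear(unfold(T, m), lam), m, T.shape)
               for m in range(K)) / K
```

Each per-mode proximal map costs one SVD of an unfolding; the averaging replaces the coupled proximal problem of the sum of overlapped norms with K independent ones.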