Matrix learning is at the core of many machine learning problems. A number of real-world applications, such as collaborative filtering and text mining, can be formulated as low-rank matrix completion, which recovers an incomplete matrix under a low-rank assumption. To ensure that the matrix solution has low rank, a recent trend is to use nonconvex regularizers that adaptively penalize singular values. They offer good recovery performance and have nice theoretical properties, but are computationally expensive due to repeated access to individual singular values. In this paper, based on the key insight that adaptive shrinkage on singular values improves empirical performance, we propose a new nonconvex low-rank regularizer, the "nuclear norm minus Frobenius norm" regularizer, which is scalable, adaptive, and sound. We first prove that it has the adaptive shrinkage property. We then derive its factored form, which bypasses the computation of singular values and allows fast optimization by general optimization algorithms. Stable recovery and convergence are guaranteed. Extensive low-rank matrix completion experiments on a number of synthetic and real-world data sets show that the proposed method achieves state-of-the-art recovery performance while being the fastest among existing low-rank matrix learning methods.
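To make the regularizer concrete, below is a minimal NumPy sketch, not the paper's implementation: the function names `nnfn` and `nnfn_factored` and the weight `alpha` are illustrative. It contrasts the direct SVD-based evaluation of "nuclear norm minus Frobenius norm" with an SVD-free surrogate on a factorization X = U Vᵀ, using the standard variational characterization ‖X‖₊ = min over X = U Vᵀ of (‖U‖²_F + ‖V‖²_F)/2; the exact factored form used in the paper may differ.

```python
import numpy as np

def nnfn(X, alpha=1.0):
    # "Nuclear norm minus Frobenius norm" regularizer evaluated
    # directly from the singular values (the expensive route that
    # repeatedly accesses individual singular values).
    s = np.linalg.svd(X, compute_uv=False)
    return s.sum() - alpha * np.sqrt((s ** 2).sum())

def nnfn_factored(U, V, alpha=1.0):
    # SVD-free surrogate on X = U @ V.T: the variational upper bound
    # (||U||_F^2 + ||V||_F^2)/2 on the nuclear norm, minus the
    # Frobenius norm of the product, which needs no singular values.
    nuc_bound = 0.5 * (np.linalg.norm(U, "fro") ** 2
                       + np.linalg.norm(V, "fro") ** 2)
    return nuc_bound - alpha * np.linalg.norm(U @ V.T, "fro")

# Sanity check: at a balanced factorization U = P sqrt(S), V = Q sqrt(S)
# from the SVD X = P S Q^T, the variational bound is tight, so both
# routes agree.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 5))  # rank <= 4
P, s, Qt = np.linalg.svd(X, full_matrices=False)
U, V = P * np.sqrt(s), Qt.T * np.sqrt(s)
print(nnfn(X), nnfn_factored(U, V))  # matching values
```

In this sketch, the factored route touches singular values only once, to build the sanity check; in an actual solver U and V would be optimization variables, so every objective evaluation costs only matrix products.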