Robust principal component analysis (RPCA) is a critical tool in modern machine learning that detects outliers in the task of low-rank matrix reconstruction. In this paper, we propose a scalable and learnable non-convex approach for high-dimensional RPCA problems, which we call Learned Robust PCA (LRPCA). LRPCA is highly efficient, and its free parameters can be effectively learned via deep unfolding to optimize performance. Moreover, we extend deep unfolding from finite iterations to infinite iterations via a novel feedforward-recurrent-mixed neural network model. We establish the recovery guarantee of LRPCA under mild assumptions for RPCA. Numerical experiments show that LRPCA outperforms state-of-the-art RPCA algorithms, such as ScaledGD and AltProj, on both synthetic datasets and real-world applications.
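To make the deep-unfolding idea concrete, the following is a minimal PyTorch sketch of an unfolded non-convex RPCA iteration with learnable per-iteration parameters. The class names (`LRPCACell`, `LRPCANet`), the soft-thresholding update for the sparse component, and the plain (un-preconditioned) gradient steps on the low-rank factors are illustrative assumptions, not the paper's exact LRPCA algorithm, which builds on ScaledGD-style updates.

```python
import torch
import torch.nn as nn

class LRPCACell(nn.Module):
    """One unfolded RPCA-style iteration (illustrative sketch, not the paper's exact update).

    Estimates the sparse outlier matrix S by soft-thresholding the residual
    with a learnable threshold, then refines the low-rank factors (L, R)
    with gradient steps whose step size is also learnable.
    """
    def __init__(self):
        super().__init__()
        self.zeta = nn.Parameter(torch.tensor(0.1))  # learnable sparsity threshold
        self.eta = nn.Parameter(torch.tensor(0.5))   # learnable step size

    def forward(self, Y, L, R):
        X = L @ R.T                                  # current low-rank estimate
        residual = Y - X
        # soft-threshold the residual to update the sparse outlier component
        S = torch.sign(residual) * torch.clamp(residual.abs() - self.zeta, min=0.0)
        # gradient steps on the factors against the remaining residual
        E = Y - S - X
        L_new = L + self.eta * (E @ R)
        R_new = R + self.eta * (E.T @ L)
        return L_new, R_new, S

class LRPCANet(nn.Module):
    """Stack of K unfolded cells with untied parameters (the feedforward part)."""
    def __init__(self, K=10):
        super().__init__()
        self.cells = nn.ModuleList(LRPCACell() for _ in range(K))

    def forward(self, Y, L0, R0):
        L, R, S = L0, R0, torch.zeros_like(Y)
        for cell in self.cells:
            L, R, S = cell(Y, L, R)
        return L @ R.T, S  # recovered low-rank matrix and sparse outliers
```

Stacking K cells with untied parameters corresponds to unfolding a finite number of iterations; the feedforward-recurrent-mixed model described in the abstract would additionally append a weight-tied recurrent cell that can be iterated indefinitely at test time.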