The robust PCA of covariance matrices plays an essential role in isolating key explanatory features. The currently available methods for performing such a low-rank plus sparse decomposition are matrix specific, meaning that these algorithms must be re-run for every new matrix. Since these algorithms are computationally expensive, it is preferable to learn and store a function that instantaneously performs this decomposition when evaluated. Therefore, we introduce Denise, a deep learning-based algorithm for robust PCA of covariance matrices, or more generally of symmetric positive semidefinite matrices, which learns precisely such a function. Theoretical guarantees for Denise are provided. These include a novel universal approximation theorem adapted to our geometric deep learning problem, convergence to an optimal solution of the learning problem, and convergence of the training scheme. Our experiments show that Denise matches state-of-the-art performance in terms of decomposition quality, while being approximately 2000x faster than the state-of-the-art method, PCP, and 200x faster than the current speed-optimized method, fast PCP.
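As a point of reference for the low-rank plus sparse decomposition mentioned above, and not as a description of Denise's own training objective, the classical Principal Component Pursuit (PCP) baseline cited in the abstract solves, for each given symmetric positive semidefinite matrix $M$, the convex program
\[
\min_{L,\,S}\; \|L\|_{*} + \lambda \|S\|_{1} \quad \text{subject to} \quad L + S = M,
\]
where $\|\cdot\|_{*}$ denotes the nuclear norm, $\|\cdot\|_{1}$ the entrywise $\ell_1$-norm, and $\lambda > 0$ a trade-off parameter. Whereas PCP and fast PCP must solve such a program anew for every matrix $M$, Denise learns a single map $M \mapsto (L, S)$ that, once trained, is merely evaluated on each new matrix.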