Matrix factorization is an important mathematical problem encountered in the contexts of dictionary learning, recommendation systems, and machine learning. We introduce a new `decimation' scheme that maps it onto neural-network models of associative memory, and we provide a detailed theoretical analysis of its performance, showing that decimation is able to factorize extensive-rank matrices and to denoise them efficiently. We also devise a decimation algorithm based on a ground-state search of the neural network, whose performance matches the theoretical predictions.
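The decimation idea sketched above (find one hidden factor via a ground-state search, project it out, repeat) can be illustrated with a toy script. This is a minimal sketch under strong assumptions, not the paper's actual algorithm: it assumes a small binary factorization Y = X H / sqrt(N), uses a simple Hopfield-like energy, and replaces the ground-state search with greedy zero-temperature single-spin-flip descent. All sizes and helper names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumption): Y = X @ H / sqrt(N) with binary factors.
N, M, P = 50, 60, 3                          # spins, samples, rank (illustrative)
X = rng.choice([-1.0, 1.0], size=(N, P))     # hidden patterns
H = rng.choice([-1.0, 1.0], size=(P, M))
Y = X @ H / np.sqrt(N)

def energy(s, Y):
    # Hopfield-like energy: low when the spin vector s aligns with a hidden pattern.
    return -np.sum((s @ Y) ** 2) / 2

def ground_state_search(Y, n_sweeps=200):
    # Stand-in for the ground-state search: greedy zero-temperature
    # single-spin-flip descent from a random initial configuration.
    s = rng.choice([-1.0, 1.0], size=Y.shape[0])
    for _ in range(n_sweeps):
        changed = False
        for i in rng.permutation(Y.shape[0]):
            e_old = energy(s, Y)
            s[i] *= -1                       # trial flip
            if energy(s, Y) < e_old:
                changed = True               # keep the flip
            else:
                s[i] *= -1                   # revert
        if not changed:
            break                            # local minimum reached
    return s

# Decimation: recover one pattern, project it out of the data, repeat.
recovered = []
Yd = Y.copy()
for _ in range(P):
    s = ground_state_search(Yd)
    recovered.append(s)
    # Deflate: remove each column's component along s (note s @ s == N).
    Yd = Yd - np.outer(s, s @ Yd) / len(s)

# Overlaps between true and recovered patterns (rows: true, cols: recovered).
overlaps = np.abs(X.T @ np.array(recovered).T) / N
```

Deflation is an orthogonal projection, so it can only shrink the Frobenius norm of the residual data; each round of the loop thus works on a matrix with one putative factor removed.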