Factorization of matrices where the rank of the two factors diverges linearly with their sizes has many applications in diverse areas such as unsupervised representation learning, dictionary learning, or sparse coding. We consider a setting where the two factors are generated from known component-wise independent prior distributions, and the statistician observes a (possibly noisy) component-wise function of their matrix product. In the limit where the dimensions of the matrices tend to infinity, but their ratios remain fixed, we expect to be able to derive closed-form expressions for the optimal mean squared error on the estimation of the two factors. However, this remains a very involved mathematical and algorithmic problem. A related, but simpler, problem is extensive-rank matrix denoising, where one aims to reconstruct a matrix with extensive but usually small rank from noisy measurements. In this paper, we approach both of these problems using high-temperature expansions at fixed order parameters. This allows us to clarify how previous attempts at solving these problems failed to find an asymptotically exact solution. We provide a systematic way to derive the corrections to these existing approximations, taking into account the structure of correlations particular to the problem. Finally, we illustrate our approach in detail on the case of extensive-rank matrix denoising. We compare our results with known optimal rotationally-invariant estimators, and show how exact asymptotic calculations of the minimal error can be performed using extensive-rank matrix integrals.
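As a minimal numerical sketch of the extensive-rank denoising setup described above: a symmetric signal matrix of rank M = αN is corrupted by additive Gaussian noise, and the estimation error is measured in mean squared error. The sizes, noise level, and the simple oracle scalar-shrinkage baseline below are illustrative assumptions, not the paper's estimator (which is a rotationally-invariant eigenvalue shrinkage).

```python
import numpy as np

rng = np.random.default_rng(0)
N, alpha, sigma = 200, 0.5, 0.3   # illustrative dimensions and noise level
M = int(alpha * N)                 # extensive rank: M grows linearly with N

# Ground-truth signal with extensive rank M
F = rng.normal(size=(N, M))
S = F @ F.T / N

# Noisy observation: symmetric Gaussian noise of typical entry size sigma/sqrt(N)
Z = rng.normal(size=(N, N))
Z = (Z + Z.T) / np.sqrt(2)
Y = S + sigma * Z / np.sqrt(N)

# Naive estimator: take the observation itself
mse_naive = np.mean((Y - S) ** 2)

# Oracle scalar shrinkage a*Y with a chosen to minimize the error;
# a crude baseline, far from the optimal rotationally-invariant estimator
a = np.sum(S * Y) / np.sum(Y * Y)
mse_shrunk = np.mean((a * Y - S) ** 2)

print(f"MSE naive:  {mse_naive:.3e}")
print(f"MSE shrunk: {mse_shrunk:.3e}")
```

Because the shrinkage coefficient is fit with access to the ground truth, its error is never worse than the naive estimator's; the optimal rotationally-invariant estimators discussed in the paper instead shrink each eigenvalue of Y individually, without oracle access.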