We consider increasingly complex models of matrix denoising and dictionary learning in the Bayes-optimal setting, in the challenging regime where the matrices to infer have a rank growing linearly with the system size. This is in contrast with most existing literature, which is concerned with the low-rank (i.e., constant-rank) regime. We first consider a class of rotationally invariant matrix denoising problems whose mutual information and minimum mean-square error are computable using standard techniques from random matrix theory. Next, we analyze the more challenging models of dictionary learning. To do so we introduce a novel combination of the replica method from statistical mechanics and random matrix theory, which we coin the spectral replica method. It allows us to conjecture variational formulas for the mutual information between the hidden representations and the noisy data of the dictionary learning problem, as well as for the overlaps quantifying the optimal reconstruction error. The proposed method reduces the number of degrees of freedom from $\Theta(N^2)$ (matrix entries) to $\Theta(N)$ (eigenvalues or singular values), and yields Coulomb gas representations of the mutual information reminiscent of matrix models in physics. The main ingredients are Harish-Chandra-Itzykson-Zuber spherical integrals combined with a new replica symmetric decoupling ansatz at the level of the probability distributions of eigenvalues (or singular values) of certain overlap matrices.
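To make the dimensional-reduction idea concrete, the following is a minimal numerical sketch (not the paper's method) of a rotationally invariant matrix denoising instance. All names, the noise level `Delta`, and the simple linear eigenvalue shrinkage used here are illustrative assumptions; the point is only that, by rotational invariance, a Bayes-optimal estimator shares the observation's eigenbasis, so estimation effectively acts on the $N$ eigenvalues ($\Theta(N)$ degrees of freedom) rather than the $\Theta(N^2)$ matrix entries.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

# Hypothetical instance: observe Y = S + sqrt(Delta) * Z, with a
# Wigner-type symmetric signal S and independent symmetric noise Z.
Delta = 0.5
S = rng.standard_normal((N, N)); S = (S + S.T) / np.sqrt(2 * N)
Z = rng.standard_normal((N, N)); Z = (Z + Z.T) / np.sqrt(2 * N)
Y = S + np.sqrt(Delta) * Z

# Rotational invariance: the estimator keeps Y's eigenvectors, so only
# the N eigenvalues need to be processed, not the N^2 entries.
evals, evecs = np.linalg.eigh(Y)

# Illustrative (not Bayes-optimal) linear shrinkage of the eigenvalues;
# 1/(1 + Delta) is the scalar Gaussian MMSE coefficient, used here only
# as a simple stand-in for a genuine spectral denoiser.
shrunk = evals / (1 + Delta)
S_hat = (evecs * shrunk) @ evecs.T

mse_obs = np.mean((Y - S) ** 2)      # error of the raw observation
mse_hat = np.mean((S_hat - S) ** 2)  # error after spectral shrinkage
assert mse_hat < mse_obs
```

Even this crude eigenvalue shrinkage improves on the raw observation; the rank-growing-linearly regime studied in the paper is precisely where such spectral (rather than entrywise) treatments become the natural parametrization.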