This article introduces new multiplicative updates for nonnegative matrix factorization with the $\beta$-divergence and sparse regularization of one of the two factors (say, the activation matrix). It is well known that the norm of the other factor (the dictionary matrix) needs to be controlled in order to avoid an ill-posed formulation. Standard practice consists in constraining the columns of the dictionary to have unit norm, which leads to a nontrivial optimization problem. Our approach leverages a reparametrization of the original problem into the optimization of an equivalent scale-invariant objective function. From there, we derive block-descent majorization-minimization algorithms that result in simple multiplicative updates for either $\ell_{1}$-regularization or the more "aggressive" log-regularization. In contrast with other state-of-the-art methods, our algorithms are universal in the sense that they can be applied to any $\beta$-divergence (i.e., any value of $\beta$) and that they come with convergence guarantees. We report numerical comparisons with existing heuristic and Lagrangian methods using various datasets: face images, an audio spectrogram, hyperspectral data, and song play counts. We show that our methods obtain solutions of similar quality at convergence (similar objective values) but with significantly reduced CPU times.
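To make the setting concrete, below is a minimal NumPy sketch of the *classic heuristic* multiplicative updates for $\ell_1$-regularized $\beta$-NMF with unit-norm dictionary columns, i.e., the kind of baseline the article compares against, not the scale-invariant algorithm introduced here. All names (`V`, `W`, `H`, `beta`, `lam`) and the renormalization step are illustrative assumptions based on standard practice.

```python
# Heuristic multiplicative updates for l1-regularized beta-NMF (baseline
# sketch; NOT the article's scale-invariant algorithm). Assumes V ~= W @ H,
# with an l1 penalty lam * ||H||_1 on the activations.
import numpy as np

def gamma(beta):
    # MM exponent of Fevotte & Idier (2011): the plain multiplicative ratio
    # is a descent step only for beta in [1, 2]; outside that range the
    # ratio is raised to this power.
    if beta < 1:
        return 1.0 / (2.0 - beta)
    if beta > 2:
        return 1.0 / (beta - 1.0)
    return 1.0

def sparse_beta_nmf(V, rank, beta=1.0, lam=0.1, n_iter=200, eps=1e-12):
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    g = gamma(beta)
    for _ in range(n_iter):
        # Update H: the l1 penalty adds a constant lam to the denominator.
        WH = W @ H + eps
        H *= ((W.T @ (WH ** (beta - 2) * V))
              / (W.T @ (WH ** (beta - 1)) + lam)) ** g
        # Update W, then renormalize its columns to unit l2 norm -- the
        # "standard practice" the abstract mentions. Rescaling H keeps
        # W @ H unchanged, but this step is exactly what breaks the
        # monotonicity guarantees on the penalized objective.
        WH = W @ H + eps
        W *= (((WH ** (beta - 2) * V) @ H.T)
              / ((WH ** (beta - 1)) @ H.T + eps)) ** g
        norms = np.linalg.norm(W, axis=0) + eps
        W /= norms
        H *= norms[:, None]
    return W, H
```

The ad-hoc renormalize-and-rescale step at the end of each iteration is precisely the kind of heuristic the article's reparametrized, scale-invariant formulation avoids, which is what restores convergence guarantees for any value of $\beta$.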