Approximating a dense matrix by a product of sparse factors is a fundamental problem in many signal processing and machine learning tasks. It can be decomposed into two subproblems: finding the positions of the non-zero coefficients in the sparse factors, and determining their values. While the first step is usually seen as the more challenging one due to its combinatorial nature, this paper focuses on the second step, referred to as sparse matrix approximation with fixed support. First, we show its NP-hardness, while also presenting a nontrivial family of supports for which the problem becomes tractable with a dedicated algorithm. Then, we investigate the landscape of its natural optimization formulation, proving the absence of spurious local valleys and spurious local minima, whose presence could prevent local optimization methods from achieving global optimality. Finally, we discuss the advantages of the proposed algorithm over state-of-the-art first-order optimization methods.
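To make the fixed-support setting concrete, the following is a minimal sketch (not the paper's dedicated algorithm) of the natural optimization formulation: minimize the Frobenius error between a dense matrix A and a product X @ Y, where the supports (non-zero patterns) of X and Y are fixed and only the coefficient values are optimized, here via projected gradient descent. All names, dimensions, supports, and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, r, n = 8, 4, 8
A = rng.standard_normal((m, n))  # dense target matrix

# Fixed supports: boolean masks chosen arbitrarily for this sketch.
SX = rng.random((m, r)) < 0.5
SY = rng.random((r, n)) < 0.5

# Initialize sparse factors, zeroing entries outside the support.
X = np.where(SX, rng.standard_normal((m, r)), 0.0)
Y = np.where(SY, rng.standard_normal((r, n)), 0.0)

loss0 = np.linalg.norm(X @ Y - A) ** 2  # initial squared Frobenius error

step = 5e-3
for _ in range(3000):
    R = X @ Y - A                          # residual
    gX = R @ Y.T                           # gradient of the loss w.r.t. X
    gY = X.T @ R                           # gradient of the loss w.r.t. Y
    X = np.where(SX, X - step * gX, 0.0)   # gradient step, projected onto support
    Y = np.where(SY, Y - step * gY, 0.0)

loss = np.linalg.norm(X @ Y - A) ** 2
```

Because the supports are held fixed, the projection step is just a mask, which is what makes first-order methods a natural baseline for this subproblem; the paper's landscape results concern exactly this kind of local optimization.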