The recovery of sparse data is at the core of many applications in machine learning and signal processing. While such problems can be tackled via $\ell_1$-regularization, as in the LASSO estimator and the Basis Pursuit approach, specialized algorithms are typically required to solve the corresponding high-dimensional non-smooth optimization problems for large instances. Iteratively Reweighted Least Squares (IRLS) is a widely used algorithm for this purpose due to its excellent numerical performance. However, while existing theory guarantees convergence of this algorithm to the minimizer, it does not provide a global convergence rate. In this paper, we prove that a variant of IRLS converges with a global linear rate to a sparse solution, i.e., with a linear error decrease occurring immediately from any initialization, provided the measurements satisfy the usual null space property assumption. We support our theory with numerical experiments showing that our linear rate captures the correct dimension dependence. We anticipate that our theoretical findings will lead to new insights for many other use cases of the IRLS algorithm, such as low-rank matrix recovery.
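For context, the sketch below shows a classical IRLS iteration for the Basis Pursuit problem $\min \|x\|_1$ subject to $Ax = y$: each step solves a weighted least-squares problem with smoothed weights $w_i = (x_i^2 + \epsilon^2)^{-1/2}$. The function name, the $\epsilon$-schedule tied to the $(k+1)$-th largest entry (a common choice in the literature), and the sparsity parameter `k` are illustrative assumptions, not necessarily the specific variant analyzed in the paper.

```python
import numpy as np

def irls_basis_pursuit(A, y, k, num_iters=50, eps=1.0):
    """Illustrative IRLS sketch for min ||x||_1 subject to A x = y.

    Each step solves  min sum_i w_i x_i^2  s.t.  A x = y  with
    w_i = (x_i^2 + eps^2)^{-1/2}, whose closed form is
    x = D A^T (A D A^T)^{-1} y  where  D = diag(1/w).
    """
    m, n = A.shape
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # least-squares initialization
    for _ in range(num_iters):
        d = np.sqrt(x**2 + eps**2)             # d_i = 1/w_i
        AD = A * d                             # equals A @ diag(d), shape (m, n)
        x = d * (A.T @ np.linalg.solve(AD @ A.T, y))  # weighted LS solution
        # Shrink the smoothing level as the iterate becomes effectively k-sparse;
        # the small floor keeps A D A^T numerically nonsingular.
        eps = max(min(eps, np.sort(np.abs(x))[::-1][k] / n), 1e-12)
    return x

# Hypothetical usage: recover a 10-sparse vector from 80 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, size=10, replace=False)] = rng.standard_normal(10)
x_hat = irls_basis_pursuit(A, A @ x_true, k=10)
print(np.linalg.norm(x_hat - x_true))  # should be small if recovery succeeds
```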