Many recent problems in signal processing and machine learning, such as compressed sensing, image restoration, matrix/tensor recovery, and non-negative matrix factorization, can be cast as constrained optimization. Projected gradient descent (PGD) is a simple yet efficient method for solving such constrained optimization problems. Local convergence analysis furthers our understanding of its asymptotic behavior near the solution, offering sharper bounds on the convergence rate compared to global convergence analysis. However, local guarantees often appear scattered in problem-specific areas of machine learning and signal processing. This manuscript presents a unified framework for the local convergence analysis of projected gradient descent in the context of constrained least squares. The proposed analysis offers insights into pivotal local convergence properties such as the conditions for linear convergence, the region of convergence, the exact asymptotic rate of convergence, and the bound on the number of iterations needed to reach a given level of accuracy. To demonstrate the applicability of the proposed approach, we present a recipe for the convergence analysis of PGD and demonstrate it via a beginning-to-end application on four fundamental problems, namely, linearly constrained least squares, sparse recovery, least squares with the unit norm constraint, and matrix completion.
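As a point of reference for the method analyzed above, the following is a minimal sketch of projected gradient descent applied to a constrained least squares problem. The example instance (a nonnegativity constraint, whose projection is elementwise clipping) and the step size 1/||A||² are illustrative assumptions, not taken from the manuscript itself.

```python
import numpy as np

def projected_gradient_descent(A, b, project, step=None, iters=2000):
    """Minimize (1/2)||Ax - b||^2 subject to x in C, where `project`
    is the Euclidean projection onto C. The default step size
    1/||A||_2^2 (inverse squared spectral norm) guarantees descent."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = project(np.zeros(A.shape[1]))
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the least squares loss
        x = project(x - step * grad)      # gradient step, then projection
    return x

# Illustrative instance: least squares under a nonnegativity constraint,
# where the projection is simply elementwise clipping at zero.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.abs(rng.standard_normal(10))  # nonnegative ground truth
b = A @ x_true
x_hat = projected_gradient_descent(A, b, lambda z: np.maximum(z, 0.0))
print(np.linalg.norm(x_hat - x_true))
```

On this well-conditioned noiseless instance the iterates converge linearly to the ground truth, consistent with the kind of local linear convergence the manuscript analyzes.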