The principle of majorization-minimization (MM) provides a general framework for deriving effective algorithms to solve optimization problems. However, the resulting algorithms often suffer from slow convergence, especially in large-scale and high-dimensional data settings. This has drawn attention to acceleration schemes designed specifically for MM algorithms, but many existing designs are either problem-specific or rely on approximations and heuristics loosely inspired by the optimization literature. We propose a novel, rigorous quasi-Newton method for accelerating any valid MM algorithm, cast as seeking a fixed point of the MM \textit{algorithm map}. The method requires no information or computation from the objective function or its gradient, and admits a limited-memory variant amenable to efficient computation in high-dimensional settings. By connecting our approach to Broyden's classical root-finding methods, we establish convergence guarantees and identify conditions for linear and superlinear convergence. These results are validated numerically and compared against peer methods in a thorough empirical study, which shows that our method achieves state-of-the-art performance across a diverse range of problems.
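To fix ideas, the classical construction underlying this connection can be sketched as follows (a standard statement of Broyden's "good" method applied to the algorithm map; the paper's limited-memory variant refines this basic form). If $F$ denotes the MM algorithm map, the MM iteration $x_{k+1} = F(x_k)$ converges to a fixed point $x^\star = F(x^\star)$, i.e., a root of the residual $g(x) = F(x) - x$. Broyden's method maintains an approximation $H_k$ to the inverse Jacobian of $g$ and iterates
\[
x_{k+1} = x_k - H_k\, g(x_k), \qquad
H_{k+1} = H_k + \frac{(s_k - H_k y_k)\, s_k^\top H_k}{s_k^\top H_k\, y_k},
\]
where $s_k = x_{k+1} - x_k$ and $y_k = g(x_{k+1}) - g(x_k)$. Note that only evaluations of $F$ are required, not the objective function or its gradient; with the initialization $H_0 = -I$, the first step $x_1 = x_0 + g(x_0) = F(x_0)$ reduces to a plain MM step.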