This monograph covers recent advances in a range of acceleration techniques frequently used in convex optimization. We first use quadratic optimization problems to introduce two key families of methods, momentum and nested optimization schemes, which coincide in the quadratic case to form the Chebyshev method, whose complexity is analyzed using Chebyshev polynomials. We discuss momentum methods in detail, starting with the seminal work of Nesterov (1983), and structure convergence proofs using a few master templates, such as that of \emph{optimized gradient methods}, which have the key benefit of showing how momentum methods maximize convergence rates. We further cover proximal acceleration techniques, at the heart of the \emph{Catalyst} and \emph{Accelerated Hybrid Proximal Extragradient} frameworks, using similar algorithmic patterns. Common acceleration techniques directly rely on the knowledge of some regularity parameters of the problem at hand, and we conclude by discussing \emph{restart} schemes, a set of simple techniques to reach nearly optimal convergence rates while adapting to unobserved regularity parameters.
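To make the momentum scheme referenced above concrete, here is a minimal NumPy sketch of Nesterov-style accelerated gradient descent applied to a small quadratic. The function names, step parameters, and test problem are illustrative choices, not taken from the monograph; the iteration follows the standard 1983-style momentum recursion for an $L$-smooth convex function.

```python
import numpy as np

def nesterov_agd(grad, x0, L, n_iters):
    """Nesterov-style accelerated gradient method for an L-smooth
    convex function, given a gradient oracle `grad`."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()          # extrapolated point
    t = 1.0               # momentum coefficient sequence
    for _ in range(n_iters):
        x_next = y - grad(y) / L                 # gradient step at the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Illustrative quadratic f(x) = 0.5 * x'Ax - b'x, with gradient Ax - b.
A = np.diag([1.0, 10.0, 100.0])   # smoothness constant L = 100 (largest eigenvalue)
b = np.ones(3)
f = lambda x: 0.5 * x @ A @ x - b @ x
x_star = np.linalg.solve(A, b)    # exact minimizer, for comparison

x_hat = nesterov_agd(lambda x: A @ x - b, np.zeros(3), L=100.0, n_iters=1000)
gap = f(x_hat) - f(x_star)        # worst-case bound: 2 L ||x0 - x*||^2 / (k+1)^2
```

The accelerated rate $O(L/k^2)$ on the objective gap contrasts with the $O(L/k)$ rate of plain gradient descent; on quadratics, this is the regime where momentum and Chebyshev-type nested schemes coincide.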