We present an optimal gradient method for smooth (possibly strongly) convex optimization. The method is optimal in the sense that its worst-case bound exactly matches the lower bound on the oracle complexity for the class of problems, meaning that no black-box first-order method can have a better worst-case guarantee without further assumptions on the problem class at hand. The method is in some sense a generalization of the Optimized Gradient Method of Kim and Fessler (2016), and asymptotically corresponds to the Triple Momentum Method (Van Scoy et al., 2017) in the presence of strong convexity. Furthermore, the method is numerically stable for arbitrarily large condition numbers and admits a conceptually very simple proof, which involves a Lyapunov argument and the sum of two inequalities. Finally, we provide a numerical recipe for obtaining the algorithmic parameters of the method via semidefinite programming, and illustrate that the same recipe can be used for developing other methods as well.
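To give a concrete flavor of the last point, the following is a minimal sketch (not the paper's actual recipe) of a performance-estimation semidefinite program, written with CVXPY. It computes the tight worst-case objective accuracy of a single gradient step on an L-smooth convex function, using the interpolation conditions of Taylor, Hendrickx, and Glineur (2017); the scaling L = 1, the step size h, and the one-step setting are illustrative assumptions. The recipe in the paper additionally handles strong convexity and uses such programs to obtain the algorithmic parameters of the method itself.

```python
import numpy as np
import cvxpy as cp

L = 1.0  # smoothness constant (illustrative scaling: L = 1)
h = 1.0  # step size, in units of 1/L

# Gram basis: (x0 - x*, g0, g1); w.l.o.g. x* = 0, g* = 0, f(x*) = 0.
G = cp.Variable((3, 3), PSD=True)  # Gram matrix of the basis vectors
f = cp.Variable(2)                 # f(x0) - f*, f(x1) - f*

# Coordinates of iterates and gradients in the Gram basis; index 2 is x*.
x = {0: np.array([1.0, 0.0, 0.0]),
     1: np.array([1.0, -h / L, 0.0]),  # x1 = x0 - (h/L) g0
     2: np.zeros(3)}
g = {0: np.array([0.0, 1.0, 0.0]),
     1: np.array([0.0, 0.0, 1.0]),
     2: np.zeros(3)}
F = {0: f[0], 1: f[1], 2: 0.0}

def dot(u, v):
    # Inner product <u, v> evaluated through the Gram matrix.
    return u @ G @ v

constraints = [dot(x[0], x[0]) <= 1.0]  # ||x0 - x*||^2 <= 1
for i in (0, 1, 2):
    for j in (0, 1, 2):
        if i == j:
            continue
        # Smooth convex interpolation:
        # f_i >= f_j + <g_j, x_i - x_j> + (1/(2L)) ||g_i - g_j||^2
        constraints.append(
            F[i] >= F[j] + dot(g[j], x[i] - x[j])
                  + dot(g[i] - g[j], g[i] - g[j]) / (2.0 * L)
        )

prob = cp.Problem(cp.Maximize(f[1]), constraints)
prob.solve()
# For h = 1, this should return roughly 1/6, consistent with the known
# tight bound L ||x0 - x*||^2 / (4N + 2) for gradient descent with N = 1.
print("worst-case f(x1) - f*:", prob.value)
```

Replacing the iterate definitions and interpolation conditions with their strongly convex counterparts, and treating the step-size coefficients as decision variables, turns this feasibility-style program into the kind of parameter-design recipe referred to in the abstract.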