Quasi-Newton algorithms are among the most popular iterative methods for solving unconstrained minimization problems, largely due to their favorable superlinear convergence property. However, existing results for these algorithms are limited, as they provide either (i) a global convergence guarantee with an asymptotic superlinear convergence rate, or (ii) a local non-asymptotic superlinear rate for the case where the initial point and the initial Hessian approximation are chosen properly. Furthermore, these results are not composable: when the iterates of a globally convergent method reach the region of local superlinear convergence, it cannot be guaranteed that the Hessian approximation matrix will satisfy the conditions required for a non-asymptotic local superlinear convergence rate. In this paper, we close this gap and present the first globally convergent quasi-Newton method with an explicit non-asymptotic superlinear convergence rate. Unlike classical quasi-Newton methods, we build our algorithm upon the hybrid proximal extragradient method and propose a novel online learning framework for updating the Hessian approximation matrices. Specifically, guided by the convergence analysis, we formulate the Hessian approximation update as an online convex optimization problem in the space of matrices, and relate the bounded regret of the online problem to the superlinear convergence of our method.
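As a rough illustration of the last idea, the following Python sketch updates a Hessian approximation by projected online gradient descent on a secant-residual loss l_k(B) = 0.5 ||B s_k - y_k||^2. This is not the paper's actual algorithm (which builds on the hybrid proximal extragradient method with a different loss and step rule); the loss, the step size eta, the eigenvalue floor in the projection, and the L*I initialization are all illustrative assumptions.

```python
# Toy sketch: a quasi-Newton iteration whose Hessian approximation B is
# treated as the decision variable of an online learner. All specifics
# below (loss, projection, step size, initialization) are assumptions
# for illustration, not the paper's actual update rule.
import numpy as np

def quadratic(x, A, b):
    """Toy objective f(x) = 0.5 x^T A x - b^T x; returns (value, gradient)."""
    return 0.5 * x @ A @ x - b @ x, A @ x - b

def project(B, floor=1e-3):
    """Project onto symmetric matrices with eigenvalues >= floor,
    an illustrative stand-in for a feasible set of Hessian approximations."""
    B = 0.5 * (B + B.T)
    w, V = np.linalg.eigh(B)
    return (V * np.clip(w, floor, None)) @ V.T

def online_learned_qn(x0, A, b, eta=0.1, iters=100):
    """Quasi-Newton steps with B updated by projected online gradient
    descent on the assumed loss l_k(B) = 0.5 * ||B s_k - y_k||^2."""
    n = x0.size
    L = np.linalg.eigvalsh(A)[-1]          # curvature upper bound
    x, B = x0.copy(), L * np.eye(n)        # conservative initial approximation
    _, g = quadratic(x, A, b)
    for _ in range(iters):
        x_new = x - np.linalg.solve(B, g)  # quasi-Newton step
        _, g_new = quadratic(x_new, A, b)
        s, y = x_new - x, g_new - g        # secant pair: ideally B s ~ y
        grad_B = np.outer(B @ s - y, s)    # gradient of l_k at B
        B = project(B - eta * grad_B)      # online gradient step + projection
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)                    # positive definite test matrix
b = rng.standard_normal(5)
print("error:", np.linalg.norm(online_learned_qn(np.zeros(5), A, b)
                               - np.linalg.solve(A, b)))
```

The intuition mirrored here is the one stated in the abstract: if the online learner accumulates small regret on these per-iteration losses, then B tracks the true curvature along the iterate trajectory, which is the mechanism the paper's analysis uses to obtain its non-asymptotic superlinear rate.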