This paper presents a new regularization approach -- termed OpReg-Boost -- to boost the convergence speed and reduce the asymptotic error of online optimization and learning algorithms. In particular, the paper considers online algorithms for optimization problems with a time-varying (weakly) convex composite cost. For a given online algorithm, OpReg-Boost learns the closest algorithmic map that yields linear convergence; to this end, the learning procedure hinges on the concept of operator regression. We show how to formalize the operator regression problem and propose a computationally efficient Peaceman-Rachford solver that exploits a closed-form solution of simple quadratically constrained quadratic programs (QCQPs). Simulation results showcase the superior properties of OpReg-Boost with respect to the more classical forward-backward algorithm, FISTA, and Anderson acceleration, as well as with respect to its close relative, convex-regression-boost (CvxReg-Boost), which is also novel but performs less well.
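For readers unfamiliar with Peaceman-Rachford splitting, the sketch below shows the generic scheme on a toy composite problem (least squares plus an l1 term). This is only an illustration of the splitting named in the abstract, not the paper's OpReg-Boost solver or its QCQP subproblems; the matrix `A`, vector `b`, and the parameters `lam` and `gamma` are illustrative assumptions.

```python
import numpy as np

# Generic Peaceman-Rachford splitting sketch for the toy problem
#   min_x 0.5*||A x - b||^2 + lam*||x||_1
# (illustrative only; NOT the paper's OpReg-Boost solver).

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))   # illustrative data
b = rng.standard_normal(30)
lam, gamma = 0.1, 1.0               # illustrative regularization weight and step size

def prox_quadratic(z):
    # prox of f(x) = 0.5*||A x - b||^2: solve (I + gamma*A^T A) x = z + gamma*A^T b
    return np.linalg.solve(np.eye(A.shape[1]) + gamma * A.T @ A, z + gamma * A.T @ b)

def prox_l1(z):
    # prox of g(x) = lam*||x||_1: soft-thresholding with threshold gamma*lam
    return np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)

z = np.zeros(A.shape[1])
for _ in range(200):
    x = prox_quadratic(z)   # resolvent of f
    w = 2 * x - z           # reflection through f
    y = prox_l1(w)          # resolvent of g
    z = 2 * y - w           # full reflection through g (Peaceman-Rachford update)

print("approximate minimizer:", y)
```

Here the quadratic term is strongly convex (the random `A` has full column rank almost surely), which is the standard setting in which Peaceman-Rachford splitting converges linearly.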