In convex optimization, the problem of finding near-stationary points has not yet been studied as systematically as other optimality measures such as the function value. Even in the deterministic case, the optimal method (OGM-G, due to Kim and Fessler (2021)) was discovered only recently. In this work, we conduct a systematic study of algorithmic techniques for finding near-stationary points of convex finite-sums. Our main contributions are several algorithmic discoveries: (1) we discover a memory-saving variant of OGM-G based on the performance estimation problem approach (Drori and Teboulle, 2014); (2) we design a new accelerated SVRG variant that simultaneously achieves fast rates for minimizing both the gradient norm and the function value; (3) we propose an adaptively regularized accelerated SVRG variant that does not require knowledge of certain unknown initial constants and achieves near-optimal complexities. We emphasize the simplicity and practicality of the new schemes, which could facilitate future work.
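To fix ideas about the finite-sum setting the abstract refers to, the sketch below shows plain (non-accelerated) SVRG on a least-squares finite-sum f(x) = (1/n) Σ_i 0.5 (a_i^T x - b_i)^2; it illustrates only the variance-reduction idea that the accelerated variants in contributions (2) and (3) build on, and is not the paper's method. The function name `svrg` and its parameters are ours, chosen for illustration.

```python
import numpy as np

def svrg(A, b, step, n_epochs=20, inner_steps=None, seed=0):
    """Basic SVRG for the least-squares finite-sum
    f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = inner_steps or n               # inner-loop length, commonly O(n)
    x = np.zeros(d)
    for _ in range(n_epochs):
        x_snap = x.copy()
        # Full gradient at the snapshot point.
        mu = A.T @ (A @ x_snap - b) / n
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced estimator: grad f_i(x) - grad f_i(x_snap) + mu,
            # which is unbiased for the full gradient at x.
            gi = A[i] * (A[i] @ x - b[i])
            gi_snap = A[i] * (A[i] @ x_snap - b[i])
            x -= step * (gi - gi_snap + mu)
    return x

# Tiny usage example on synthetic data; we report the final gradient norm,
# the optimality measure the abstract focuses on.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
b = A @ rng.standard_normal(10)
x_hat = svrg(A, b, step=0.01)
print("final gradient norm:", np.linalg.norm(A.T @ (A @ x_hat - b) / len(b)))
```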