The problem of finding near-stationary points in convex optimization has not yet been adequately studied, unlike other optimality measures such as minimizing the function value. Even in the deterministic case, the optimal method (OGM-G, due to Kim and Fessler (2021)) was discovered only recently. In this work, we conduct a systematic study of algorithmic techniques for finding near-stationary points of convex finite-sums. Our main contributions are several algorithmic discoveries: (1) we discover a memory-saving variant of OGM-G based on the performance estimation problem approach (Drori and Teboulle, 2014); (2) we design a new accelerated SVRG variant that simultaneously achieves fast rates for minimizing both the gradient norm and the function value; (3) we propose an adaptively regularized accelerated SVRG variant that does not require knowledge of certain unknown initial constants and achieves near-optimal complexities. We put an emphasis on the simplicity and practicality of the new schemes, which could facilitate future developments.
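For context on the base method that the variants in (2) and (3) build upon, below is a minimal sketch of the classic SVRG loop of Johnson and Zhang (2013) applied to a convex finite-sum. The step size `eta` and epoch length `m` here are illustrative placeholders, not the tuned parameters of the accelerated variants described above.

```python
import numpy as np

def svrg(grad_i, n, w0, eta=0.1, epochs=20, m=None):
    """Classic SVRG on a finite-sum f(w) = (1/n) * sum_i f_i(w).

    grad_i(i, w) returns the gradient of the i-th component f_i at w.
    eta (step size) and m (inner-loop length) are illustrative defaults.
    """
    m = m or 2 * n                       # common heuristic for the epoch length
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()                # snapshot point for this epoch
        # full gradient at the snapshot, averaged over all n components
        mu = np.mean([grad_i(i, w_snap) for i in range(n)], axis=0)
        for _ in range(m):
            i = np.random.randint(n)
            # variance-reduced stochastic gradient: unbiased estimate of
            # the full gradient whose variance shrinks near the snapshot
            g = grad_i(i, w) - grad_i(i, w_snap) + mu
            w -= eta * g
    return w
```

The key design choice is the control variate `grad_i(i, w_snap) - mu`: it keeps the stochastic gradient unbiased while driving its variance toward zero as the iterates approach the snapshot, which is what enables fast rates on finite-sums without a decaying step size.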