Incremental methods are widely used for solving finite-sum optimization problems in machine learning and signal processing. In this paper, we study a family of incremental methods -- including the incremental subgradient, incremental proximal point, and incremental prox-linear methods -- for solving weakly convex optimization problems. This problem class covers many nonsmooth nonconvex instances that arise in engineering applications. We show that all three incremental methods have an iteration complexity of $O(\varepsilon^{-4})$ for driving a natural stationarity measure below $\varepsilon$. Moreover, we show that if the weakly convex objective satisfies a sharpness condition, then all three methods, when properly initialized and equipped with geometrically diminishing stepsizes, achieve a local linear rate of convergence. Our work is the first to extend the convergence rate analysis of incremental methods from the nonsmooth convex setting to the weakly convex setting. Lastly, we conduct numerical experiments on the robust matrix sensing problem to illustrate the convergence behavior of the three incremental methods.
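For concreteness, the per-component updates of the three methods take the following standard forms. This is a sketch under assumed notation not fixed by the abstract itself: an objective $\min_x \frac{1}{m}\sum_{i=1}^{m} f_i(x)$ with each $f_i$ weakly convex, a stepsize $\alpha_k > 0$, and, for the prox-linear variant, a composite structure $f_i = h_i \circ c_i$ with $h_i$ convex and $c_i$ smooth.

% Standard forms of the three incremental updates for one component f_i
% (notation assumed: alpha_k is the stepsize, g_k an arbitrary subgradient).
\begin{align*}
  &\text{incremental subgradient:} &
    x_{k+1} &= x_k - \alpha_k g_k, \quad g_k \in \partial f_i(x_k); \\
  &\text{incremental proximal point:} &
    x_{k+1} &= \operatorname*{arg\,min}_{x} \Big\{ f_i(x) + \tfrac{1}{2\alpha_k}\|x - x_k\|^2 \Big\}; \\
  &\text{incremental prox-linear:} &
    x_{k+1} &= \operatorname*{arg\,min}_{x} \Big\{ h_i\big(c_i(x_k) + \nabla c_i(x_k)(x - x_k)\big) + \tfrac{1}{2\alpha_k}\|x - x_k\|^2 \Big\}.
\end{align*}

Here "geometrically diminishing stepsizes" refers to a schedule of the form $\alpha_k = \alpha_0 \rho^k$ for some $\rho \in (0,1)$, which is what underlies the local linear convergence under the sharpness condition.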