We develop an implementable stochastic proximal point (SPP) method for a class of weakly convex, composite optimization problems. The proposed stochastic proximal point algorithm incorporates a variance reduction mechanism, and the resulting SPP updates are solved using an inexact semismooth Newton framework. We establish detailed convergence results that take the inexactness of the SPP steps into account and that are in accordance with existing convergence guarantees of (proximal) stochastic variance-reduced gradient methods. Numerical experiments show that the proposed algorithm competes favorably with other state-of-the-art methods and is more robust with respect to step size selection.
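To make the update concrete, the following is a minimal sketch of one variance-reduced SPP step on an ℓ1-regularized logistic regression instance. All problem data and parameter names (A, b, lam, gamma) are illustrative assumptions, and the SPP subproblem is solved inexactly by a few proximal-gradient inner iterations rather than the semismooth Newton framework developed in the paper; the example uses a convex loss purely for simplicity.

```python
# Hedged sketch (not the paper's implementation) of a variance-reduced
# stochastic proximal point (SPP) step for l1-regularized logistic regression.
import numpy as np

def logistic_grad(w, a, b):
    """Gradient of the single-sample logistic loss log(1 + exp(-b * a @ w))."""
    t = -b * (a @ w)
    return (-b / (1.0 + np.exp(-t))) * a

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def vr_spp_step(w, w_ref, g_ref, a, b, lam, gamma, inner_iters=20):
    """One variance-reduced SPP step,
       w+ = argmin_x f_i(x) + <g_ref - grad f_i(w_ref), x>
                     + lam*||x||_1 + (1/(2*gamma))*||x - w||^2,
       solved inexactly by proximal-gradient iterations (illustrative inner
       solver; the paper uses an inexact semismooth Newton method)."""
    shift = g_ref - logistic_grad(w_ref, a, b)    # variance-reduction correction
    x = w.copy()
    eta = gamma / (1.0 + gamma * 0.25 * (a @ a))  # 1 / (Lipschitz bound of smooth part)
    for _ in range(inner_iters):
        grad = logistic_grad(x, a, b) + shift + (x - w) / gamma
        x = soft_threshold(x - eta * grad, eta * lam)
    return x

# Tiny usage example on synthetic data (assumed setup, not from the paper).
rng = np.random.default_rng(0)
n, d = 200, 20
A = rng.standard_normal((n, d))
b = np.sign(A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n))
lam, gamma = 1e-2, 0.5
w = np.zeros(d)
for epoch in range(10):
    w_ref = w.copy()
    g_ref = np.mean([logistic_grad(w_ref, A[i], b[i]) for i in range(n)], axis=0)
    for _ in range(n):
        i = rng.integers(n)
        w = vr_spp_step(w, w_ref, g_ref, A[i], b[i], lam, gamma)
```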