In this work, we provide a fundamental unified convergence theorem for deriving expected and almost sure convergence results for a series of stochastic optimization methods. Our unified theorem only requires the verification of several representative conditions and is not tailored to any specific algorithm. As a direct application, we recover expected and almost sure convergence results of the stochastic gradient method (SGD) and random reshuffling (RR) under more general settings. Moreover, we establish new expected and almost sure convergence results for the stochastic proximal gradient method (prox-SGD) and stochastic model-based methods (SMM) for nonsmooth nonconvex optimization problems. These applications reveal that our unified theorem provides a plugin-type convergence analysis and strong convergence guarantees for a wide class of stochastic optimization methods.
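For concreteness, the following display sketches the update rules commonly associated with two of the methods mentioned above; the step sizes $\alpha_k$, the sampled component index $i_k$, and the nonsmooth regularizer $\varphi$ are illustrative notation and are not part of the unified theorem itself.

\begin{align*}
  \text{SGD:} \quad & x^{k+1} = x^k - \alpha_k \nabla f_{i_k}(x^k), \\
  \text{prox-SGD:} \quad & x^{k+1} = \operatorname{prox}_{\alpha_k \varphi}\!\big(x^k - \alpha_k \nabla f_{i_k}(x^k)\big).
\end{align*}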