Single-call stochastic extragradient methods, such as stochastic past extragradient (SPEG) and stochastic optimistic gradient (SOG), have gained a lot of interest in recent years and are among the most efficient algorithms for solving large-scale min-max optimization and variational inequality problems (VIPs) arising in various machine learning tasks. However, despite their undoubted popularity, current convergence analyses of SPEG and SOG require a bounded variance assumption. In addition, several important questions regarding the convergence properties of these methods remain open, including mini-batching, efficient step-size selection, and convergence guarantees under different sampling strategies. In this work, we address these questions and provide convergence guarantees for two large classes of structured non-monotone VIPs: (i) quasi-strongly monotone problems (a generalization of strongly monotone problems) and (ii) weak Minty variational inequalities (a generalization of monotone and Minty VIPs). We introduce the expected residual condition, explain its benefits, and show that it is strictly weaker than the previously used growth conditions, expected co-coercivity, and bounded variance assumptions. Equipped with this condition, we provide theoretical guarantees for the convergence of single-call extragradient methods under different step-size selections, including constant, decreasing, and step-size-switching rules. Furthermore, our convergence analysis holds under the arbitrary sampling paradigm, which includes importance sampling and various mini-batching strategies as special cases.
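For concreteness, here is a minimal sketch of the single-call update and the two problem classes; the notation (operator $F$, unbiased stochastic estimator $g(\cdot;\xi)$, step size $\gamma_t$, solution $x^*$) is assumed here rather than fixed by the abstract. SPEG (Popov's past extragradient) reuses the previous extrapolation gradient, so each iteration requires only one new oracle call $g(\bar{x}_t;\xi_t)$:
\[
\bar{x}_t = x_t - \gamma_t\, g(\bar{x}_{t-1};\xi_{t-1}), \qquad
x_{t+1} = x_t - \gamma_t\, g(\bar{x}_t;\xi_t),
\]
in contrast to the standard (two-call) stochastic extragradient, which must additionally compute $g(x_t;\xi_t)$ for the extrapolation step. The two structured non-monotone classes are, in their standard forms, for all $x$ and a solution $x^*$: quasi-strong monotonicity, $\langle F(x),\, x - x^* \rangle \ge \mu \|x - x^*\|^2$ with $\mu > 0$, and the weak Minty condition, $\langle F(x),\, x - x^* \rangle \ge -\rho \|F(x)\|^2$ with $\rho \ge 0$; the latter permits a negative lower bound and thus covers monotone and Minty VIPs as special cases.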