Two of the most prominent algorithms for solving unconstrained smooth games are the classical stochastic gradient descent-ascent (SGDA) and the recently introduced stochastic consensus optimization (SCO) (Mescheder et al., 2017). SGDA is known to converge to a stationary point for specific classes of games, but current convergence analyses require a bounded-variance assumption. SCO is used successfully for solving large-scale adversarial problems, but its convergence guarantees are limited to its deterministic variant. In this work, we introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO under this condition for solving a class of stochastic variational inequality problems that are potentially non-monotone. We prove linear convergence of both methods to a neighborhood of the solution when they use a constant step-size, and we propose insightful step-size-switching rules to guarantee convergence to the exact solution. In addition, our convergence guarantees hold under the arbitrary sampling paradigm, and as such, we give insights into the complexity of minibatching.
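As a minimal illustration of the SGDA update described above, the sketch below runs simultaneous stochastic descent on one variable and ascent on the other for a toy strongly-convex–strongly-concave quadratic game with additive gradient noise. The problem, its parameters, and the noise model are hypothetical examples, not taken from the paper; with a constant step-size the iterates settle into a noise-dominated neighborhood of the saddle point, matching the qualitative behavior the abstract describes.

```python
import numpy as np

# Toy game: f(x, y) = (a/2) x^2 + b*x*y - (a/2) y^2,
# whose unique saddle point is (0, 0). Gaussian noise on the
# gradients models the stochastic oracle. All values below are
# illustrative assumptions, not the paper's setup.

rng = np.random.default_rng(0)
a, b = 1.0, 0.5          # game parameters (hypothetical)
gamma = 0.05             # constant step-size
sigma = 0.1              # gradient noise level
x, y = 5.0, -3.0         # initial point

for _ in range(2000):
    gx = a * x + b * y + sigma * rng.standard_normal()  # stochastic grad w.r.t. x
    gy = b * x - a * y + sigma * rng.standard_normal()  # stochastic grad w.r.t. y
    x -= gamma * gx      # descent step on the min variable
    y += gamma * gy      # ascent step on the max variable

# With a constant step-size, (x, y) reaches only a neighborhood of
# the solution; driving it to the exact solution requires a
# decreasing or switching step-size schedule.
print(abs(x) < 1.0 and abs(y) < 1.0)
```

Shrinking `gamma` over time (e.g., switching from the constant value to a decreasing schedule, in the spirit of the step-size-switching rules mentioned above) trades slower progress for a smaller final neighborhood.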