We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions. Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods in both the Euclidean and Bregman setups. All proposed methods converge in exactly the same setting as their deterministic counterparts, and they either match or improve upon the best-known complexities for solving structured min-max problems. Our results reinforce the correspondence between variance reduction in variational inequalities and in minimization. We also illustrate the improvements of our approach with numerical evaluations on matrix games.
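To make the matrix-game setting concrete, the following is a minimal Python sketch of a variance-reduced extragradient loop for the bilinear game min_x max_y x^T A y over probability simplices (Euclidean setup). The snapshot schedule, row/column sampling scheme, step size, and helper names (`project_simplex`, `full_op`, `sampled_op`, `vr_extragradient`) are illustrative assumptions, not the exact algorithms or parameters analyzed in the paper.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def full_op(A, x, y):
    """Monotone operator of the bilinear game min_x max_y <x, A y>."""
    return A @ y, -A.T @ x

def sampled_op(A, x, y, i, j):
    """Cheap unbiased estimate of full_op from one column j and one row i of A."""
    m, n = A.shape
    return n * A[:, j] * y[j], -m * A[i, :] * x[i]

def vr_extragradient(A, epochs=50, inner=200, tau=None, seed=0):
    """Illustrative variance-reduced extragradient loop for a matrix game (assumed parameters)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x, y = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    if tau is None:
        tau = 0.5 / np.linalg.norm(A, 2)   # conservative step size (assumption)
    for _ in range(epochs):
        wx, wy = x.copy(), y.copy()        # snapshot point
        Fwx, Fwy = full_op(A, wx, wy)      # full operator at the snapshot
        for _ in range(inner):
            # variance-reduced estimator at the current point
            i, j = rng.integers(m), rng.integers(n)
            gx, gy = sampled_op(A, x, y, i, j)
            hx, hy = sampled_op(A, wx, wy, i, j)
            dx, dy = Fwx + gx - hx, Fwy + gy - hy
            # extrapolation (leading) step
            xh = project_simplex(x - tau * dx)
            yh = project_simplex(y - tau * dy)
            # variance-reduced estimator at the extrapolated point (fresh sample)
            i, j = rng.integers(m), rng.integers(n)
            gx, gy = sampled_op(A, xh, yh, i, j)
            hx, hy = sampled_op(A, wx, wy, i, j)
            dx, dy = Fwx + gx - hx, Fwy + gy - hy
            # update step
            x = project_simplex(x - tau * dx)
            y = project_simplex(y - tau * dy)
    return x, y

# Example use on a random matrix game; the duality gap max(A^T x) - min(A y)
# should shrink toward zero as the iterates approach a saddle point.
A = np.random.default_rng(1).standard_normal((100, 100))
x, y = vr_extragradient(A)
gap = np.max(A.T @ x) - np.min(A @ y)
```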