We develop new adaptive algorithms for variational inequalities with monotone operators, which capture many problems of interest, notably convex optimization and convex-concave saddle point problems. Our algorithms automatically adapt to unknown problem parameters such as the smoothness and the norm of the operator, and the variance of the stochastic evaluation oracle. We show that our algorithms are universal and simultaneously achieve the optimal convergence rates in the non-smooth, smooth, and stochastic settings. The convergence guarantees of our algorithms improve over those of existing adaptive methods by an $\Omega(\sqrt{\ln T})$ factor, matching the optimal non-adaptive algorithms. Additionally, prior works require that the optimization domain is bounded. In this work, we remove this restriction and give algorithms for unbounded domains that are adaptive and universal. Our general proof techniques apply to many variants of the algorithm using one or two operator evaluations per iteration. The classical methods based on the ExtraGradient/MirrorProx algorithm require two operator evaluations per iteration, which in many settings is the dominant factor in the running time.
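To illustrate the two-evaluation structure mentioned above, here is a minimal sketch of the classical (non-adaptive) ExtraGradient update for an unconstrained monotone operator, applied to the bilinear saddle point $f(x, y) = xy$, whose operator $F(x, y) = (y, -x)$ is monotone. The step size `eta` and iteration count are illustrative choices, not parameters from the paper; the adaptive algorithms the abstract describes would instead set the step size from observed operator values.

```python
import numpy as np

def extragradient(F, z0, eta=0.1, iters=1000):
    """Classical ExtraGradient: two evaluations of F per iteration."""
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z_half = z - eta * F(z)   # first evaluation: extrapolation step
        z = z - eta * F(z_half)   # second evaluation: actual update
    return z

# Monotone operator of the bilinear saddle point f(x, y) = x * y.
F = lambda z: np.array([z[1], -z[0]])

# ExtraGradient converges to the saddle point (0, 0); plain
# gradient descent-ascent diverges on this same problem.
z_star = extragradient(F, z0=[1.0, 1.0])
```

Each iteration costs two calls to `F`, which is why single-call variants matter when the operator evaluation dominates the running time.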