In this paper we present a first-order method that attains near-optimal convergence rates for convex-concave min-max problems while admitting a simple and intuitive analysis. Similarly to the seminal work of Nemirovski and the recent approach of Piliouras et al. in normal-form games, our work is based on the fact that the update rule of the Proximal Point method (PP) can be approximated, up to accuracy $\epsilon$, with only $O(\log 1/\epsilon)$ additional gradient calls through the iterations of a contraction map. Combining the analysis of the PP method with an error-propagation analysis, we establish that the resulting first-order method, called Clairvoyant Extra Gradient, admits near-optimal time-average convergence for general domains and last-iterate convergence in the unconstrained case.
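To illustrate the contraction-map approximation, consider a minimal sketch of the unconstrained case, where $F$ denotes the (monotone) operator of the min-max problem, e.g. $F(x,y) = (\nabla_x f(x,y), -\nabla_y f(x,y))$, assumed $L$-Lipschitz, and the step size $\eta$ is an assumption chosen so that $\eta L < 1$. The PP update is defined implicitly by
$$z_{k+1} = z_k - \eta\, F(z_{k+1}),$$
and can be approximated by the fixed-point iteration
$$z_{k+1}^{(t+1)} = z_k - \eta\, F\big(z_{k+1}^{(t)}\big), \qquad z_{k+1}^{(0)} = z_k.$$
Since the map $z \mapsto z_k - \eta F(z)$ is a contraction with modulus $\eta L < 1$, the Banach fixed-point theorem gives $\|z_{k+1}^{(t)} - z_{k+1}\| \le (\eta L)^t \|z_{k+1}^{(0)} - z_{k+1}\|$, so accuracy $\epsilon$ is reached after $t = O(\log 1/\epsilon)$ additional gradient calls per step.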