In this paper we present a first-order method that attains near-optimal convergence rates for convex/concave min-max problems while admitting a simple and intuitive analysis. Similarly to the seminal work of Nemirovski and the recent approach of Piliouras et al. in normal-form games, our work is based on the fact that the update rule of the Proximal Point (PP) method can be approximated, up to accuracy $\epsilon$, with only $\mathcal{O}(\log 1/\epsilon)$ additional gradient calls via the iterations of a contraction map. By combining the analysis of the PP method with an error-propagation analysis, we establish that the resulting first-order method, called \textit{Clairvoyant Extra Gradient}, attains near-optimal time-average convergence for general domains and last-iterate convergence in the unconstrained case.
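As a concrete illustration of the approximation idea (this sketch and all names in it are our own, not code from the paper): the implicit PP update $z_{t+1} = z_t - \eta F(z_{t+1})$ can be approximated by iterating the contraction map $w \mapsto z_t - \eta F(w)$, which converges geometrically when $\eta L < 1$. We show this on the simple bilinear game $f(x, y) = xy$:

```python
import numpy as np

def F(z):
    # Operator for min_x max_y x*y: F(z) = (df/dx, -df/dy) = (y, -x).
    x, y = z
    return np.array([y, -x])

def approx_prox_step(z, eta, num_inner):
    # Approximate the implicit PP update z' = z - eta*F(z') by fixed-point
    # iteration of w -> z - eta*F(w); this map is a contraction when
    # eta*L < 1 (here L = 1), so the error shrinks like (eta*L)^k and
    # ~log(1/eps) inner gradient calls suffice for eps accuracy.
    w = z.copy()
    for _ in range(num_inner):
        w = z - eta * F(w)
    return w

z = np.array([1.0, 1.0])
eta = 0.5
for t in range(200):
    z = approx_prox_step(z, eta, num_inner=20)

print(np.linalg.norm(z))  # iterates approach the equilibrium (0, 0)
```

Unlike gradient descent-ascent, which cycles on this bilinear game, the (approximate) PP iterates contract toward the equilibrium at every step; the inner loop's $\mathcal{O}(\log 1/\epsilon)$ cost is exactly the overhead the abstract refers to.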