Optimistic Gradient Descent Ascent (OGDA) and Optimistic Multiplicative Weights Update (OMWU) for saddle-point optimization have received growing attention due to their favorable last-iterate convergence. However, their behavior in simple bilinear games over the probability simplex is still not fully understood: previous analyses lack explicit convergence rates, apply only to an exponentially small learning rate, or require additional assumptions such as the uniqueness of the optimal solution. In this work, we significantly expand the understanding of last-iterate convergence for OGDA and OMWU in the constrained setting. Specifically, for OMWU in bilinear games over the simplex, we show that when the equilibrium is unique, linear last-iterate convergence is achieved with a learning rate set to a universal constant, improving the result of (Daskalakis & Panageas, 2019b) under the same assumption. We then significantly extend the results to more general objectives and feasible sets for the projected OGDA algorithm, by introducing a sufficient condition under which OGDA exhibits concrete last-iterate convergence rates with a constant learning rate that depends only on the smoothness of the objective function. We show that bilinear games over any polytope satisfy this condition, and that OGDA converges exponentially fast even without the unique-equilibrium assumption. Our condition also holds for strongly-convex-strongly-concave functions, recovering the result of (Hsieh et al., 2019). Finally, we provide experimental results to further support our theory.
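To make the setting concrete, below is a minimal sketch of the OMWU dynamics for a bilinear game min_x max_y x^T A y over probability simplices with a constant learning rate. It is an illustration under assumed names (the payoff matrix `A`, the step size `eta`, and the iteration count `iters` are placeholders), not the paper's reference implementation.

```python
# Minimal sketch of OMWU for min_x max_y x^T A y over probability simplices,
# assuming a payoff matrix A and a constant learning rate eta (names are
# illustrative, not taken from the paper).
import numpy as np

def omwu(A, eta=0.1, iters=1000):
    m, n = A.shape
    x = np.full(m, 1.0 / m)          # min-player strategy on the simplex
    y = np.full(n, 1.0 / n)          # max-player strategy on the simplex
    grad_x_prev = A @ y              # previous-round gradients (optimism term)
    grad_y_prev = A.T @ x
    for _ in range(iters):
        grad_x = A @ y               # current gradient for the x-player
        grad_y = A.T @ x             # current gradient for the y-player
        # Optimistic multiplicative-weights step: use 2*g_t - g_{t-1}
        x = x * np.exp(-eta * (2 * grad_x - grad_x_prev))
        y = y * np.exp( eta * (2 * grad_y - grad_y_prev))
        x /= x.sum()                 # renormalize onto the simplex
        y /= y.sum()
        grad_x_prev, grad_y_prev = grad_x, grad_y
    return x, y
```

The optimism term (using 2g_t - g_{t-1} in place of g_t) is what distinguishes OMWU from plain multiplicative weights; the abstract's claim is that with a constant eta, the last iterate (x, y) itself converges linearly to the equilibrium when it is unique.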