We propose and analyze exact and inexact regularized Newton-type methods for finding a global saddle point of a \textit{convex-concave} unconstrained min-max optimization problem. Compared to first-order methods, second-order methods for min-max optimization have received relatively little attention, as obtaining global convergence rates with second-order information is considerably more involved. In this paper, we highlight how second-order information can be used to speed up the dynamics of dual extrapolation methods despite inexactness. Specifically, we show that the proposed algorithms generate iterates that remain within a bounded set and that the averaged iterates converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a gap function. Our algorithms match the theoretically established lower bound in this context, and we provide a simple and intuitive convergence analysis for second-order methods without requiring any compactness assumptions. Finally, we present a series of numerical experiments on synthetic and real data that demonstrate the efficiency of the proposed algorithms.
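For concreteness, one standard instantiation of the gap function (the abstract does not define it, so this is an assumed, illustrative choice; it is restricted to a bounded set so that it remains finite in the unconstrained setting) is the primal-dual gap
\[
\mathrm{Gap}(\bar{x},\bar{y}) \;=\; \max_{\|y - y_0\| \le \beta} f(\bar{x}, y) \;-\; \min_{\|x - x_0\| \le \beta} f(x, \bar{y}),
\]
where $f$ is convex in $x$ and concave in $y$, $(x_0, y_0)$ is the initial point, and $\beta$ is a hypothetical radius chosen large enough that the ball contains the iterates (consistent with the boundedness guarantee above). A point $(\bar{x},\bar{y})$ is then an $\epsilon$-saddle point if $\mathrm{Gap}(\bar{x},\bar{y}) \le \epsilon$.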