Nonconvex-nonconcave minimax optimization has been the focus of intense research over the last decade due to its broad applications in machine learning and operations research. Unfortunately, most existing algorithms cannot be guaranteed to converge and often suffer from limit cycles. Their global convergence relies on certain conditions that are difficult to check, including but not limited to the global Polyak-\L{}ojasiewicz condition, the existence of a solution satisfying the weak Minty variational inequality, and the $\alpha$-interaction dominance condition. In this paper, we develop the first provably convergent algorithm, called the doubly smoothed gradient descent ascent (DS-GDA) method, which eliminates limit cycles without requiring any additional conditions. We further show that the algorithm has an iteration complexity of $\mathcal{O}(\epsilon^{-4})$ for finding a game stationary point, which matches the best-known iteration complexity of single-loop algorithms under nonconvex-concave settings. The algorithm presented here opens up a new path for designing provable algorithms for nonconvex-nonconcave minimax optimization problems.
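To make the construction concrete, one way to realize double smoothing (a sketch consistent with the description above; the surrogate below and the step sizes $c$, $\alpha$, $\beta$, $\mu$ and smoothing weights $r_1$, $r_2$ are illustrative placeholders, not necessarily the paper's exact parameterization) is to run single-loop gradient descent ascent on a proximally regularized function with two anchor variables,
\[
F(x, z, y, v) \;=\; f(x, y) \;+\; \frac{r_1}{2}\,\|x - z\|^2 \;-\; \frac{r_2}{2}\,\|y - v\|^2,
\]
updating the primal-dual pair and the two smoothing anchors as
\[
x_{t+1} = x_t - c\,\nabla_x F(x_t, z_t, y_t, v_t), \qquad
y_{t+1} = y_t + \alpha\,\nabla_y F(x_{t+1}, z_t, y_t, v_t),
\]
\[
z_{t+1} = z_t + \beta\,(x_{t+1} - z_t), \qquad
v_{t+1} = v_t + \mu\,(y_{t+1} - v_t).
\]
Intuitively, the slowly moving anchors $z$ and $v$ damp oscillations on both the primal and dual sides; smoothing both variables, rather than only $x$ as in nonconvex-concave schemes, is what counteracts the nonconcavity in $y$ and rules out limit cycles.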