We consider the problem of computing an equilibrium in a class of \textit{nonlinear generalized Nash equilibrium problems (NGNEPs)}, in which each player's strategy set is defined by equality and inequality constraints that may depend on the choices of rival players. While the asymptotic global convergence and local convergence rates of certain algorithms have been extensively investigated, iteration complexity analysis is still in its infancy. This paper presents two first-order algorithms, based on the quadratic penalty method (QPM) and the augmented Lagrangian method (ALM), respectively, each using an accelerated mirror-prox algorithm as the inner-loop solver. We establish nonasymptotic convergence rates for these algorithms. In particular, we provide global convergence guarantees for solving monotone and strongly monotone NGNEPs, together with complexity bounds expressed in terms of the number of gradient evaluations. Experimental results demonstrate the efficiency of our algorithms in practice.
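To make the penalty-based template concrete, the following minimal Python sketch illustrates the idea under simplifying assumptions: a plain Euclidean mirror-prox (extragradient) loop stands in for the accelerated solver used in the paper, the pseudo-gradient \texttt{F}, shared constraint map \texttt{g}, and its Jacobian \texttt{Jg} are supplied by hand, and the names \texttt{mirror\_prox}, \texttt{qpm\_ngnep}, and \texttt{alm\_ngnep} are hypothetical. This is a sketch of the generic QPM/ALM scheme, not the paper's actual algorithms.

\begin{verbatim}
import numpy as np

def mirror_prox(F, x0, step, iters):
    """Euclidean mirror-prox (extragradient) for the VI with operator F:
    extrapolate with F(x), then update with F evaluated at the midpoint."""
    x = x0.copy()
    for _ in range(iters):
        y = x - step * F(x)   # extrapolation (leading) step
        x = x - step * F(y)   # correction step at the midpoint
    return x

def qpm_ngnep(F, g, Jg, x0, L0, Lg, rho0=1.0, factor=10.0,
              outer=5, inner=2000):
    """Quadratic-penalty outer loop: fold the shared constraints
    g(x) <= 0 into the pseudo-gradient and re-solve with growing rho,
    warm-starting each subproblem at the previous iterate.
    L0, Lg are rough Lipschitz estimates used to scale the step."""
    x, rho = x0.copy(), rho0
    for _ in range(outer):
        F_pen = lambda z, r=rho: F(z) + r * Jg(z).T @ np.maximum(g(z), 0.0)
        x = mirror_prox(F_pen, x, 1.0 / (L0 + rho * Lg), inner)
        rho *= factor
    return x

def alm_ngnep(F, g, Jg, x0, L0, Lg, rho=10.0, outer=20, inner=2000):
    """Augmented-Lagrangian variant: keep rho fixed and update the
    multiplier estimate lam after each subproblem solve."""
    x, lam = x0.copy(), np.zeros_like(g(x0))
    for _ in range(outer):
        F_al = lambda z, l=lam: F(z) + Jg(z).T @ np.maximum(l + rho * g(z), 0.0)
        x = mirror_prox(F_al, x, 1.0 / (L0 + rho * Lg), inner)
        lam = np.maximum(lam + rho * g(x), 0.0)  # projected dual ascent
    return x, lam

if __name__ == "__main__":
    # Toy strongly monotone two-player quadratic game with the shared
    # (jointly convex) constraint x1 + x2 <= 0.5; solution ~ (0.25, 0.25).
    A = np.array([[2.0, 0.5], [0.5, 2.0]])
    F = lambda x: A @ x - np.array([1.0, 1.0])  # pseudo-gradient
    g = lambda x: np.array([x[0] + x[1] - 0.5])
    Jg = lambda x: np.array([[1.0, 1.0]])
    print(qpm_ngnep(F, g, Jg, np.zeros(2), L0=2.5, Lg=2.0))
    print(alm_ngnep(F, g, Jg, np.zeros(2), L0=2.5, Lg=2.0)[0])
\end{verbatim}

In this sketch the only difference between the two outer loops is how constraint violation enters the operator: QPM scales the violation by an increasing penalty $\rho$, while ALM keeps $\rho$ fixed and accumulates it into the multiplier estimate, which is why ALM typically tolerates much smaller penalty parameters.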