We consider the problem of computing an equilibrium in a class of \textit{nonlinear generalized Nash equilibrium problems (NGNEPs)} in which the strategy sets for each player are defined by equality and inequality constraints that may depend on the choices of rival players. While the asymptotic global convergence and local convergence rates of algorithms to solve this problem have been extensively investigated, the analysis of nonasymptotic iteration complexity is still in its infancy. This paper presents two first-order algorithms -- based on the quadratic penalty method (QPM) and augmented Lagrangian method (ALM), respectively -- with an accelerated mirror-prox algorithm as the solver in each inner loop. We establish a global convergence guarantee for solving monotone and strongly monotone NGNEPs and provide nonasymptotic complexity bounds expressed in terms of the number of gradient evaluations. Experimental results demonstrate the efficiency of our algorithms in practice.
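For concreteness, the following is a minimal sketch of the standard NGNEP formulation and the penalized subproblems behind the two methods; the notation ($f_i$, $h_i$, $g_i$, penalty parameter $\rho_k$, multipliers $\lambda_i$, $\mu_i$) is assumed here for illustration rather than taken from the paper. Given the rivals' strategies $x_{-i}$, each player $i$ solves
\begin{align*}
  \min_{x_i \in X_i} \; & f_i(x_i, x_{-i}) \\
  \text{s.t.} \; & h_i(x_i, x_{-i}) = 0, \qquad g_i(x_i, x_{-i}) \le 0,
\end{align*}
where the constraint functions may depend on $x_{-i}$. The QPM-based outer loop replaces the constraints with a quadratic penalty,
\[
  \min_{x_i \in X_i} \; f_i(x_i, x_{-i}) + \frac{\rho_k}{2}\Big(\|h_i(x_i, x_{-i})\|^2 + \big\|[g_i(x_i, x_{-i})]_+\big\|^2\Big),
\]
where $[\cdot]_+$ denotes the componentwise positive part, while the ALM-based outer loop instead minimizes the augmented Lagrangian
\[
  f_i(x_i, x_{-i}) + \lambda_i^\top h_i(x_i, x_{-i}) + \frac{\rho_k}{2}\|h_i(x_i, x_{-i})\|^2 + \frac{1}{2\rho_k}\Big(\big\|[\mu_i + \rho_k\, g_i(x_i, x_{-i})]_+\big\|^2 - \|\mu_i\|^2\Big),
\]
with multiplier updates between outer iterations. In either case, the smooth game arising at each outer iteration yields a (strongly) monotone problem that the accelerated mirror-prox inner loop solves to a prescribed accuracy.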