In this paper, we investigate the power of {\it regularization}, a common technique in reinforcement learning and optimization, in solving extensive-form games (EFGs). We propose a series of new algorithms based on regularizing the payoff functions of the game, and establish a set of convergence results that strictly improve over the existing ones, with either weaker assumptions or stronger convergence guarantees. In particular, we first show that dilated optimistic mirror descent (DOMD), an efficient variant of OMD for solving EFGs, with adaptive regularization can achieve a fast $\tilde O(1/T)$ last-iterate convergence in terms of the duality gap and the distance to the set of Nash equilibria (NE), without assuming uniqueness of the NE. Second, we show that regularized counterfactual regret minimization (\texttt{Reg-CFR}), with a variant of the optimistic mirror descent algorithm as the regret minimizer, can achieve $O(1/T^{1/4})$ best-iterate and $O(1/T^{3/4})$ average-iterate convergence rates for finding an NE in EFGs. Finally, we show that \texttt{Reg-CFR} can achieve asymptotic last-iterate convergence, and an optimal $O(1/T)$ average-iterate convergence rate, for finding the NE of perturbed EFGs, which is useful for finding approximate extensive-form perfect equilibria (EFPE). To the best of our knowledge, these constitute the first last-iterate convergence results for CFR-type algorithms, while matching the state-of-the-art average-iterate convergence rate for finding NE in non-perturbed EFGs. We also provide numerical results to corroborate the advantages of our algorithms.
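As a loose illustration of the payoff-regularization idea underlying DOMD and \texttt{Reg-CFR} (not the paper's exact algorithms), the following minimal sketch runs optimistic multiplicative-weights updates on an entropy-regularized two-player zero-sum {\it matrix} game, a normal-form stand-in for the dilated EFG setting. The payoff matrix \texttt{A}, step size \texttt{eta}, regularization weight \texttt{tau}, and horizon \texttt{T} are illustrative assumptions.

```python
# Minimal sketch (assumptions: normal-form game, entropy regularizer, fixed tau):
# optimistic multiplicative-weights updates on the regularized payoff
#   f_tau(x, y) = x^T A y + tau*psi(x) - tau*psi(y),  psi = negative entropy.
# With tau fixed, the last iterate approaches the regularized NE, which is
# within O(tau) of an NE of the original game.
import numpy as np

def regularized_omwu(A, tau=0.1, eta=0.1, T=2000):
    """Return the last iterates (x, y) of optimistic MWU on f_tau."""
    m, n = A.shape
    x, y = np.ones(m) / m, np.ones(n) / n
    gx_prev, gy_prev = np.zeros(m), np.zeros(n)
    for _ in range(T):
        # Gradients of the regularized payoff (x minimizes, y maximizes).
        gx = A @ y + tau * (np.log(x) + 1.0)
        gy = A.T @ x - tau * (np.log(y) + 1.0)
        # Optimistic step: use the predicted gradient 2*g_t - g_{t-1}.
        x = x * np.exp(-eta * (2 * gx - gx_prev))
        y = y * np.exp(+eta * (2 * gy - gy_prev))
        x, y = x / x.sum(), y / y.sum()
        gx_prev, gy_prev = gx, gy
    return x, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    x, y = regularized_omwu(A)
    # Duality gap of the *unregularized* game at the last iterate.
    gap = (A.T @ x).max() - (A @ y).min()
    print("last-iterate duality gap:", gap)
```

The sketch is only meant to convey the mechanism: regularizing the payoff makes the update contractive toward the regularized equilibrium, and shrinking the regularization (adaptively in DOMD) recovers convergence to an NE of the original game.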