Nash equilibrium is a central concept in game theory. Several Nash solvers exist, yet none scale to normal-form games with many actions and many players, especially those whose payoff tensors are too large to store in memory. In this work, we propose an approach that iteratively improves an approximation to a Nash equilibrium through joint play. It does so by tracing a previously established homotopy that defines a continuum of equilibria for the game regularized with decaying levels of entropy. This continuum asymptotically approaches the limiting logit equilibrium, proven by McKelvey and Palfrey (1995) to be unique in almost all games, thereby partially circumventing the well-known equilibrium selection problem of many-player games. To encourage iterates to remain near this path, we efficiently minimize average deviation incentive via stochastic gradient descent, intelligently sampling entries in the payoff tensor as needed. Monte Carlo estimates of the stochastic gradient from joint play are biased due to the appearance of a nonlinear max operator in the objective, so we introduce additional innovations into the algorithm to alleviate this bias. The descent process can also be viewed as repeatedly constructing and reacting to a polymatrix approximation of the game. In these respects, our proposed approach, average deviation incentive descent with adaptive sampling (ADIDAS), is most similar to three classical approaches: homotopy-type, Lyapunov, and iterative polymatrix solvers. The lack of local convergence guarantees for biased gradient descent precludes a proof of convergence to Nash; however, extensive experiments demonstrate that the approach approximates a unique Nash equilibrium in normal-form games with as many as seven players and twenty-one actions (several billion outcomes), orders of magnitude larger than the games prior algorithms can handle.
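To make the descent concrete, below is a minimal sketch (not the authors' implementation) of the core idea in JAX: minimize an entropy-regularized average deviation incentive by gradient descent while annealing the temperature, thereby tracking the quantal response homotopy toward the limiting logit equilibrium. The sketch assumes a tiny game whose full payoff tensor fits in memory, so gradients are exact and the adaptive-sampling and bias-correction machinery of ADIDAS is unnecessary; the hard max is softened with a temperature-scaled logsumexp. All names (deviation_payoffs, adi) and the step-size/temperature schedule are illustrative choices, not taken from the paper.

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

n_players, n_actions = 3, 3
# Tiny random game: payoffs[i] is player i's full payoff tensor, indexed by
# every player's action. ADIDAS targets games where this tensor is far too
# large to store; here it fits, so gradients are exact and unbiased.
payoffs = jax.random.uniform(
    jax.random.PRNGKey(0), (n_players,) + (n_actions,) * n_players)

def deviation_payoffs(i, x):
    """u_i(a_i, x_{-i}): player i's payoff for each pure action a_i."""
    t = payoffs[i]
    # Contract every opponent's axis against their mixed strategy,
    # highest axis first so the remaining axis indices stay valid.
    for j in reversed(range(n_players)):
        if j != i:
            t = jnp.tensordot(t, x[j], axes=([j], [0]))
    return t  # shape (n_actions,)

def adi(logits, tau):
    """Entropy-regularized average deviation incentive of softmax(logits)."""
    x = jax.nn.softmax(logits, axis=-1)  # one mixed strategy per row
    gaps = []
    for i in range(n_players):
        u = deviation_payoffs(i, x)
        # tau * logsumexp(u / tau) = max_p [p.u + tau*H(p)], the value of the
        # entropy-softened best response that replaces the hard max.
        br_value = tau * logsumexp(u / tau)
        entropy = -jnp.sum(x[i] * jnp.log(x[i] + 1e-12))
        gaps.append(br_value - (x[i] @ u + tau * entropy))
    # Nonnegative; zero exactly at the logit (quantal response) equilibrium
    # for this temperature tau.
    return jnp.mean(jnp.asarray(gaps))

grad_adi = jax.jit(jax.grad(adi))

# Follow the homotopy: descend ADI while annealing tau toward zero, so the
# iterate tracks the continuum of quantal response equilibria toward the
# limiting logit equilibrium.
logits = jnp.zeros((n_players, n_actions))  # start at the uniform profile
for tau in [1.0, 0.5, 0.25, 0.1, 0.05]:
    for _ in range(500):
        logits = logits - 0.5 * grad_adi(logits, tau)
    print(f"tau={tau:.2f}  regularized ADI={float(adi(logits, tau)):.6f}")
```

At each temperature the objective vanishes exactly at the corresponding logit equilibrium, so driving it to zero while decaying tau mirrors tracing the homotopy path; the full ADIDAS algorithm replaces the exact tensor contractions above with adaptively sampled estimates of the same quantities.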