Solving strategic games with huge action spaces is a critical yet under-explored topic in economics, operations research and artificial intelligence. This paper proposes new learning algorithms for solving two-player zero-sum normal-form games where the number of pure strategies is prohibitively large. Specifically, we combine no-regret analysis from online learning with Double Oracle (DO) methods from game theory. Our method -- \emph{Online Double Oracle (ODO)} -- is provably convergent to a Nash equilibrium (NE). Most importantly, unlike normal DO methods, ODO is \emph{rational} in the sense that each agent in ODO can exploit a strategic adversary with a regret bound of $\mathcal{O}(\sqrt{T k \log(k)})$, where $k$ is not the total number of pure strategies, but rather the size of the \emph{effective strategy set}, which is linearly dependent on the support size of the NE. On tens of different real-world games, ODO outperforms DO, PSRO methods, and no-regret algorithms such as Multiplicative Weights Update by a significant margin, both in terms of convergence rate to a NE and average payoff against strategic adversaries.
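To make the high-level description concrete, the following is a minimal sketch of an ODO-style loop for a zero-sum game with payoff matrix $A$: each player runs Multiplicative Weights Update restricted to a growing effective strategy set, and periodically adds a best response to the opponent's average strategy, as in Double Oracle. The window size, step-size schedule, and weight-initialization rule for newly added strategies are illustrative assumptions, not the paper's exact specification.

\begin{verbatim}
import numpy as np

def online_double_oracle(A, iters=10_000, window=50, seed=0):
    """Sketch of an ODO-style loop; row player maximizes x @ A @ y.
    Window, step size, and new-strategy weights are assumptions."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    S_row, S_col = [int(rng.integers(n))], [int(rng.integers(m))]  # effective strategy sets
    w_row, w_col = np.ones(1), np.ones(1)                          # MWU weights over S_row / S_col
    avg_row, avg_col = np.zeros(n), np.zeros(m)                    # time-averaged strategies

    for t in range(1, iters + 1):
        eta = np.sqrt(np.log(max(len(S_row), 2)) / t)              # adaptive step size (assumed)
        x = np.zeros(n); x[S_row] = w_row / w_row.sum()
        y = np.zeros(m); y[S_col] = w_col / w_col.sum()
        avg_row += x; avg_col += y

        # MWU updates restricted to the current effective strategy sets
        w_row *= np.exp(eta * (A[S_row] @ y))                      # row player maximizes
        w_col *= np.exp(-eta * (x @ A[:, S_col]))                  # column player minimizes

        if t % window == 0:                                        # periodic double-oracle expansion
            br_row = int(np.argmax(A @ (avg_col / t)))             # best response to opponent's average
            br_col = int(np.argmin((avg_row / t) @ A))
            if br_row not in S_row:
                S_row.append(br_row); w_row = np.append(w_row, w_row.mean())
            if br_col not in S_col:
                S_col.append(br_col); w_col = np.append(w_col, w_col.mean())

    return avg_row / iters, avg_col / iters                        # approximate NE strategies
\end{verbatim}

On games whose NE has small support, the effective strategy sets \texttt{S\_row} and \texttt{S\_col} stay far smaller than $n$ and $m$, which is the source of the $k$-dependent (rather than action-space-dependent) regret bound stated above.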