Regret minimization has proved to be a versatile tool for tree-form sequential decision making and extensive-form games. In large two-player zero-sum imperfect-information games, modern extensions of counterfactual regret minimization (CFR) are currently the practical state of the art for computing a Nash equilibrium. Most regret-minimization algorithms for tree-form sequential decision making, including CFR, require (i) an exact model of the player's decision nodes, observation nodes, and how they are linked, and (ii) full knowledge, at all times t, of the payoffs -- even in parts of the decision space that are not encountered at time t. Recently, there has been growing interest in relaxing some of those restrictions and making regret minimization applicable to settings for which reinforcement learning methods have traditionally been used -- for example, those in which only black-box access to the environment is available. We give the first, to our knowledge, regret-minimization algorithm that guarantees sublinear regret with high probability even when requirement (i) -- and thus also (ii) -- is dropped. We formalize an online learning setting in which the strategy space is not known to the agent and is revealed incrementally whenever the agent encounters new decision points. We give an efficient algorithm that achieves $O(T^{3/4})$ regret with high probability for that setting, even when the agent faces an adversarial environment. Our experiments show that it significantly outperforms the prior algorithms for the problem, which do not have such guarantees. It can be used in any application for which regret minimization is useful: approximating Nash equilibrium or quantal response equilibrium, approximating coarse correlated equilibrium in multi-player games, learning a best response, learning safe opponent exploitation, and online play against an unknown opponent/environment.
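To make the abstract's central notion concrete: the basic building block underlying CFR is the regret-matching update, in which the learner plays each action with probability proportional to its positive cumulative regret. The sketch below is a minimal, self-contained illustration of that update in a toy one-shot setting with a fixed payoff sequence; it is not the paper's algorithm (which handles an incrementally revealed tree-form strategy space), and the toy environment is purely an assumption for illustration.

```python
def regret_matching_strategy(cum_regret):
    """Map cumulative regrets to a distribution over actions,
    proportional to positive regret (uniform if none is positive)."""
    positives = [max(r, 0.0) for r in cum_regret]
    total = sum(positives)
    n = len(cum_regret)
    if total <= 0.0:
        return [1.0 / n] * n
    return [p / total for p in positives]

def run_regret_matching(payoff_rows, T):
    """Play T rounds of regret matching against a cyclic sequence of
    payoff vectors; return the time-averaged strategy.

    payoff_rows[t][a] is the payoff of action a at round t
    (hypothetical toy environment, not the paper's setting)."""
    n = len(payoff_rows[0])
    cum_regret = [0.0] * n
    avg = [0.0] * n
    for t in range(T):
        strat = regret_matching_strategy(cum_regret)
        payoffs = payoff_rows[t % len(payoff_rows)]
        expected = sum(s * u for s, u in zip(strat, payoffs))
        # Regret of action a: what we would have gained by playing a
        # instead of the current mixed strategy.
        for a in range(n):
            cum_regret[a] += payoffs[a] - expected
            avg[a] += strat[a] / T
    return avg
```

For example, with a single payoff vector in which one action strictly dominates, the averaged strategy concentrates on that action; in games, running one such minimizer per information set and averaging the strategies is what drives CFR's convergence guarantees.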