Multi-agent reinforcement learning (MARL) is often modeled using the framework of Markov games (also called stochastic games or dynamic games). Most of the existing literature on MARL concentrates on zero-sum Markov games and is not applicable to general-sum Markov games. It is known that the best-response dynamics in general-sum Markov games are not a contraction. Therefore, different equilibria in general-sum Markov games can have different values. Moreover, the Q-function is not sufficient to completely characterize the equilibrium. Given these challenges, model-based learning is an attractive approach for MARL in general-sum Markov games. In this paper, we investigate the fundamental question of \emph{sample complexity} for model-based MARL algorithms in general-sum Markov games. We show two results. We first use Hoeffding-inequality-based bounds to show that $\tilde{\mathcal{O}}( (1-\gamma)^{-4} \alpha^{-2})$ samples per state-action pair are sufficient to obtain an $\alpha$-approximate Markov perfect equilibrium with high probability, where $\gamma$ is the discount factor and the $\tilde{\mathcal{O}}(\cdot)$ notation hides logarithmic terms. We then use Bernstein-inequality-based bounds to show that $\tilde{\mathcal{O}}( (1-\gamma)^{-1} \alpha^{-2} )$ samples are sufficient. To obtain these results, we study the robustness of Markov perfect equilibrium to model approximations. We show that the Markov perfect equilibrium of an approximate (or perturbed) game is always an approximate Markov perfect equilibrium of the original game, and we provide explicit bounds on the approximation error. We illustrate the results via a numerical example.
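As a quick illustration of the gap between the two stated rates, the following sketch evaluates the polynomial parts of the Hoeffding-based and Bernstein-based bounds for a sample choice of $\gamma$ and $\alpha$ (constants and logarithmic factors hidden by $\tilde{\mathcal{O}}(\cdot)$ are ignored; the function names and the chosen values are illustrative, not from the paper):

```python
# Polynomial parts of the two per-state-action-pair sample-complexity rates,
# with constants and logarithmic factors dropped.

def hoeffding_samples(gamma: float, alpha: float) -> float:
    """Hoeffding-based rate: (1 - gamma)^{-4} * alpha^{-2}."""
    return (1.0 - gamma) ** -4 * alpha ** -2

def bernstein_samples(gamma: float, alpha: float) -> float:
    """Bernstein-based rate: (1 - gamma)^{-1} * alpha^{-2}."""
    return (1.0 - gamma) ** -1 * alpha ** -2

gamma, alpha = 0.9, 0.1  # illustrative values
print(hoeffding_samples(gamma, alpha))  # on the order of 10^6
print(bernstein_samples(gamma, alpha))  # on the order of 10^3
```

The two rates differ by a factor of $(1-\gamma)^{-3}$, so the Bernstein-based analysis yields a substantially smaller sample requirement as the discount factor approaches one.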