We study multi-player general-sum Markov games in which one player is designated as the leader and the remaining players are regarded as followers. In particular, we focus on the class of games where the followers are myopic, i.e., they aim to maximize their instantaneous rewards. For such a game, our goal is to find a Stackelberg-Nash equilibrium (SNE), which is a policy pair $(\pi^*, \nu^*)$ such that (i) $\pi^*$ is the optimal policy for the leader when the followers always play their best response, and (ii) $\nu^*$ is the best response policy of the followers, i.e., a Nash equilibrium of the followers' game induced by $\pi^*$. We develop sample-efficient reinforcement learning (RL) algorithms for solving for an SNE in both online and offline settings. Our algorithms are optimistic and pessimistic variants of least-squares value iteration, and they readily incorporate function approximation tools to handle large state spaces. Furthermore, for the case with linear function approximation, we prove that our algorithms achieve sublinear regret in the online setup and sublinear suboptimality in the offline setup. To the best of our knowledge, we establish the first provably efficient RL algorithms for solving for SNEs in general-sum Markov games with myopic followers.
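As an illustrative formalization of the SNE condition described above (a sketch using notation introduced here, e.g., $\mathrm{BR}(\pi)$ and $V^{\pi,\nu}$, which need not match the paper's exact notation):
\[
\pi^* \in \operatorname*{argmax}_{\pi}\; V^{\pi,\,\mathrm{BR}(\pi)},
\qquad
\nu^* = \mathrm{BR}(\pi^*),
\]
where $\mathrm{BR}(\pi)$ denotes a Nash equilibrium of the (myopic) followers' game induced by the leader policy $\pi$, and $V^{\pi,\nu}$ denotes the leader's expected cumulative reward under the joint policy $(\pi,\nu)$.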