A unique challenge in Multi-Agent Reinforcement Learning (MARL) is the curse of multiagency, where the description length of the game as well as the complexity of many existing learning algorithms scale exponentially with the number of agents. While recent works successfully address this challenge under the model of tabular Markov Games, their mechanisms critically rely on the number of states being finite and small, and do not extend to practical scenarios with enormous state spaces where function approximation must be used to approximate value functions or policies. This paper presents the first line of MARL algorithms that provably resolve the curse of multiagency under function approximation. We design a new decentralized algorithm -- V-Learning with Policy Replay, which gives the first polynomial sample complexity results for learning approximate Coarse Correlated Equilibria (CCEs) of Markov Games under decentralized linear function approximation. Our algorithm always outputs Markov CCEs, and achieves an optimal rate of $\widetilde{\mathcal{O}}(\epsilon^{-2})$ for finding $\epsilon$-optimal solutions. Also, when restricted to the tabular case, our result improves over the current best decentralized result $\widetilde{\mathcal{O}}(\epsilon^{-3})$ for finding Markov CCEs. We further present an alternative algorithm -- Decentralized Optimistic Policy Mirror Descent, which finds policy-class-restricted CCEs using a polynomial number of samples. In exchange for learning a weaker version of CCEs, this algorithm applies to a wider range of problems under generic function approximation, such as linear quadratic games and MARL problems with low ``marginal'' Eluder dimension.