Mean field control (MFC) is an effective way to mitigate the curse of dimensionality of cooperative multi-agent reinforcement learning (MARL) problems. This work considers a collection of $N_{\mathrm{pop}}$ heterogeneous agents that can be segregated into $K$ classes such that the $k$-th class contains $N_k$ homogeneous agents. We aim to prove approximation guarantees of the MARL problem for this heterogeneous system by its corresponding MFC problem. We consider three scenarios where the reward and transition dynamics of all agents are respectively taken to be functions of $(1)$ joint state and action distributions across all classes, $(2)$ individual distributions of each class, and $(3)$ marginal distributions of the entire population. We show that, in these cases, the $K$-class MARL problem can be approximated by MFC with errors given as $e_1=\mathcal{O}(\frac{\sqrt{|\mathcal{X}||\mathcal{U}|}}{N_{\mathrm{pop}}}\sum_{k}\sqrt{N_k})$, $e_2=\mathcal{O}(\sqrt{|\mathcal{X}||\mathcal{U}|}\sum_{k}\frac{1}{\sqrt{N_k}})$ and $e_3=\mathcal{O}\left(\sqrt{|\mathcal{X}||\mathcal{U}|}\left[\frac{A}{N_{\mathrm{pop}}}\sum_{k\in[K]}\sqrt{N_k}+\frac{B}{\sqrt{N_{\mathrm{pop}}}}\right]\right)$, respectively, where $A, B$ are some constants and $|\mathcal{X}|,|\mathcal{U}|$ are the sizes of state and action spaces of each agent. Finally, we design a Natural Policy Gradient (NPG) based algorithm that, in the three cases stated above, can converge to an optimal MARL policy within $\mathcal{O}(e_j)$ error with a sample complexity of $\mathcal{O}(e_j^{-3})$, $j\in\{1,2,3\}$, respectively.