In fully cooperative multi-agent reinforcement learning (MARL) settings, the environment is highly stochastic from each agent's perspective due to its partial observability and the continuously changing policies of the other agents. To address these issues, we integrate distributional RL and value function factorization methods by proposing a Distributional Value Function Factorization (DFAC) framework that generalizes expected value function factorization methods to their DFAC variants. DFAC extends the individual utility functions from deterministic variables to random variables, and models the quantile function of the total return as a quantile mixture. To validate DFAC, we demonstrate its ability to factorize a simple two-step matrix game with stochastic rewards and perform experiments on all Super Hard tasks of the StarCraft Multi-Agent Challenge, showing that DFAC outperforms expected value function factorization baselines.
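To make the quantile-mixture idea concrete, the following is a minimal NumPy sketch, assuming the total return's quantile function is formed as a non-negative weighted sum of per-agent quantile functions evaluated at shared quantile fractions. The agent count, weights, and names such as `agent_quantiles` and `mixture_weights` are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

# Illustrative quantile mixture: each agent i contributes a quantile function
# F_i^{-1}(omega), represented by its values at a shared set of quantile fractions.
# A non-negative weighted sum of non-decreasing quantile functions is itself
# non-decreasing, so it remains a valid quantile function for the total return.

num_agents = 3
num_quantiles = 8
omegas = (np.arange(num_quantiles) + 0.5) / num_quantiles  # midpoint quantile fractions

# Hypothetical per-agent return distributions: row i holds F_i^{-1}(omega) for all omegas.
rng = np.random.default_rng(0)
agent_quantiles = np.sort(rng.normal(size=(num_agents, num_quantiles)), axis=1)

# Non-negative mixing weights (e.g., they could be produced by a state-conditioned network).
mixture_weights = np.abs(rng.normal(size=num_agents))

# Quantile mixture: F_tot^{-1}(omega) = sum_i w_i * F_i^{-1}(omega).
total_quantiles = mixture_weights @ agent_quantiles

assert np.all(np.diff(total_quantiles) >= 0)  # still a valid (non-decreasing) quantile function
print(total_quantiles)
```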