This paper introduces Greedy UnMix (GUM) for cooperative multi-agent reinforcement learning (MARL). Greedy UnMix aims to avoid the failure modes that arise when MARL methods overestimate values over the large joint state-action space. It addresses this with a conservative Q-learning approach that restricts the state marginal to the dataset, avoiding unobserved joint state-action pairs, while concurrently attempting to unmix, or simplify, the problem space under the centralized training with decentralized execution paradigm. We demonstrate that the learned Q-function adheres to a lower bound in MARL settings, and show superior performance over existing Q-learning MARL approaches as well as more general MARL algorithms on a set of benchmark MARL tasks, despite the relative simplicity of GUM compared with state-of-the-art approaches.
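To make the conservative-Q-learning idea described above concrete, the following is a minimal sketch (not the paper's exact method) of a CQL-style penalty applied to a joint action-value under centralized training with decentralized execution, using a simple VDN-style additive mixer as a stand-in. All class names, shapes, and the factorised form of the penalty are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact method): a CQL-style conservative
# penalty on a mixed joint Q-value under CTDE, with a VDN-style additive mixer.
import torch
import torch.nn as nn


class AgentQ(nn.Module):
    """Per-agent utility network Q_i(o_i, .) over a discrete action set."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )

    def forward(self, obs):          # obs: [batch, obs_dim]
        return self.net(obs)         # -> [batch, n_actions]


def conservative_td_loss(agent_nets, obs, actions, targets, alpha=1.0):
    """TD loss plus a CQL-style penalty that pushes down Q-values on actions
    not seen in the dataset, keeping the mixed value a (soft) lower bound
    off-support.

    obs:     [batch, n_agents, obs_dim]
    actions: [batch, n_agents]   (dataset / behaviour actions)
    targets: [batch]             (bootstrapped joint targets, assumed given)
    """
    per_agent_q, per_agent_chosen = [], []
    for i, net in enumerate(agent_nets):
        q_i = net(obs[:, i])                                   # [batch, n_actions]
        per_agent_q.append(q_i)
        per_agent_chosen.append(
            q_i.gather(1, actions[:, i:i + 1]).squeeze(1)      # [batch]
        )

    # VDN-style additive mixing of the chosen per-agent utilities.
    q_joint = torch.stack(per_agent_chosen, dim=1).sum(dim=1)  # [batch]
    td_loss = nn.functional.mse_loss(q_joint, targets)

    # Conservative penalty: logsumexp over each agent's action set minus the
    # value of the dataset action (a factorised stand-in for a joint penalty).
    cql_penalty = sum(
        (torch.logsumexp(q_i, dim=1) - q_sel).mean()
        for q_i, q_sel in zip(per_agent_q, per_agent_chosen)
    )
    return td_loss + alpha * cql_penalty
```

The coefficient `alpha` trades off standard TD fitting against conservatism; larger values push the learned joint value further below its true value on out-of-distribution joint actions.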