Real-world cooperation often requires intensive, simultaneous coordination among agents. This problem has been extensively studied within the framework of cooperative multi-agent reinforcement learning (MARL), and value decomposition methods are among the state-of-the-art solutions. However, traditional methods that learn the value function as a monotonic mixing of per-agent utilities cannot solve tasks with non-monotonic returns, which hinders their application to more general scenarios. Recent methods tackle this problem from the perspective of implicit credit assignment, either by learning value functions with full expressiveness or by using additional structures to improve cooperation. However, they are either difficult to train due to large joint action spaces or insufficient to capture the complicated interactions among agents that are essential for solving tasks with non-monotonic returns. To address these problems, we propose a novel explicit credit assignment method, Adaptive Value decomposition with Greedy Marginal contribution (AVGM), which is based on an adaptive value decomposition that learns the cooperative value of a group of dynamically changing agents. We first show that the proposed value decomposition can capture the complicated interactions among agents and remains feasible to learn in large-scale scenarios. Our method then uses a greedy marginal contribution, computed from the value decomposition, as an individual credit to incentivize agents to learn the optimal cooperative policy. We further extend the module with an action encoder to guarantee linear time complexity for computing the greedy marginal contribution. Experimental results demonstrate that our method achieves significant performance improvements in several non-monotonic domains.
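For concreteness, the monotonicity constraint that limits traditional value decomposition methods (e.g., QMIX), and a generic form of the greedy marginal contribution of the kind AVGM uses as an individual credit, can be sketched as follows; the symbols $Q_{\mathrm{tot}}$, $Q_i$, $g$, and $\phi_i$ here are illustrative assumptions, and the exact formulation in the paper may differ:

\[
\frac{\partial Q_{\mathrm{tot}}(\boldsymbol{\tau}, \mathbf{a})}{\partial Q_i(\tau_i, a_i)} \ge 0, \quad \forall i \in \{1, \dots, n\},
\]

\[
\phi_i(g) = \max_{a_i} Q\big(g \cup \{i\},\, (\mathbf{a}_g, a_i)\big) - Q(g, \mathbf{a}_g),
\]

where $g$ is a (dynamically changing) group of agents, $Q(g, \cdot)$ is the group value learned by the adaptive value decomposition, and $\phi_i(g)$ is the greedy marginal contribution of agent $i$, obtained by maximizing over its own action. A task has non-monotonic returns when no per-agent utilities satisfying the first constraint can represent the true joint return; the marginal contribution sidesteps this by crediting each agent with the value it adds to the group directly. Naively, computing $\phi_i$ for all $n$ agents requires re-evaluating the group value per agent; the action encoder mentioned above is what amortizes this to linear time overall.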