We address the challenge of learning factored policies in cooperative MARL scenarios. In particular, we consider the situation in which a team of agents collaborates to optimize a common cost. The goal is to obtain factored policies that determine the individual behavior of each agent such that the resulting joint policy is optimal. The main contribution of this work is the introduction of Logical Team Q-learning (LTQL). LTQL does not rely on assumptions about the environment and is therefore generally applicable to any cooperative MARL scenario. We derive LTQL as a stochastic approximation to a dynamic programming method we introduce in this work. We conclude the paper with experiments (in both the tabular and deep settings) that illustrate these claims.