In multi-agent reinforcement learning, the use of a global objective is a powerful tool for incentivising cooperation. Unfortunately, it is not sample-efficient to train individual agents with a global reward, because the global reward does not necessarily correlate with an agent's individual actions. This problem can be solved by factorising the global value function into local value functions. Early work in this domain performed factorisation by conditioning local value functions purely on local information. Recently, it has been shown that providing both local information and an encoding of the global state can promote cooperative behaviour. In this paper we propose QGNN, the first value factorisation method to use a graph neural network (GNN)-based model. The multi-layer message passing architecture of QGNN provides more representational complexity than models in prior work, allowing it to produce a more effective factorisation. QGNN also introduces a permutation-invariant mixer that matches the performance of other methods with significantly fewer parameters. We evaluate our method against several baselines, including QMIX-Att, GraphMIX, QMIX, VDN, and hybrid architectures. Our experiments include StarCraft, the standard benchmark for credit assignment; Estimate Game, a custom environment that explicitly models inter-agent dependencies; and Coalition Structure Generation, a foundational problem with real-world applications. The results show that QGNN consistently outperforms state-of-the-art value factorisation baselines.
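To make the factorisation idea concrete, the following is a minimal sketch (in PyTorch, assumed here for illustration) of a permutation-invariant, GNN-style mixer that aggregates per-agent Q-values into a joint value. The module names, the mean aggregation, the fixed number of message-passing rounds, and the omission of global-state conditioning are assumptions made for brevity; this is not the paper's exact architecture.

```python
# Minimal sketch (not the paper's exact architecture): a permutation-invariant,
# GNN-style mixer that combines per-agent Q-values into a global Q-value.
# Module names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GNNMixer(nn.Module):
    def __init__(self, n_agents: int, embed_dim: int = 32, n_rounds: int = 2):
        super().__init__()
        self.encode = nn.Linear(1, embed_dim)           # embed each agent's scalar Q
        self.message = nn.Linear(embed_dim, embed_dim)  # shared message function
        self.update = nn.GRUCell(embed_dim, embed_dim)  # node update after aggregation
        self.readout = nn.Linear(embed_dim, 1)          # per-node contribution to Q_tot
        self.n_rounds = n_rounds

    def forward(self, agent_qs: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents), the Q-value of each agent's chosen action
        h = torch.relu(self.encode(agent_qs.unsqueeze(-1)))  # (batch, n_agents, d)
        for _ in range(self.n_rounds):
            # Fully connected agent graph: every node receives the mean of all
            # messages, so the update is invariant to agent ordering.
            msgs = self.message(h).mean(dim=1, keepdim=True).expand_as(h)
            h = self.update(msgs.reshape(-1, h.size(-1)), h.reshape(-1, h.size(-1)))
            h = h.view_as(msgs)
        # Summing per-node readouts yields a permutation-invariant Q_tot.
        return self.readout(h).sum(dim=1)  # (batch, 1)

# Usage: mix 4 agents' Q-values for a batch of 8 joint observations.
mixer = GNNMixer(n_agents=4)
q_tot = mixer(torch.randn(8, 4))
print(q_tot.shape)  # torch.Size([8, 1])
```

Because the message aggregation and the summed readout are both symmetric in the agents, the mixer's output is unchanged under any permutation of the agent dimension, which is the property the abstract refers to.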