Breakthrough advances in reinforcement learning (RL) research have led to a surge in the development and application of RL. To support the field and its rapid growth, several frameworks have emerged that aim to help the community more easily build effective and scalable agents. However, very few of these frameworks exclusively support multi-agent RL (MARL), an increasingly active field in its own right, concerned with decentralised decision-making problems. In this work, we attempt to fill this gap by presenting Mava: a research framework specifically designed for building scalable MARL systems. Mava provides useful components, abstractions, utilities and tools for MARL, and allows for simple scaling to multi-process system training and execution, while providing a high level of flexibility and composability. Mava is built on top of DeepMind's Acme \citep{hoffman2020acme}, and therefore integrates with, and greatly benefits from, the wide range of single-agent RL components already available in Acme. Several MARL baseline systems have already been implemented in Mava. These implementations serve as examples showcasing Mava's reusable features, such as interchangeable system architectures, communication and mixing modules. Furthermore, these implementations allow existing MARL algorithms to be easily reproduced and extended. We provide experimental results for these implementations on a wide range of multi-agent environments and highlight the benefits of distributed system training.