Deep reinforcement learning (RL) has recently shown great promise in robotic continuous control tasks. Nevertheless, prior research in this vein centres on the centralised learning setting, which largely relies on communication being available among all the components of a robot. However, agents in the real world often operate in a decentralised fashion without communication, owing to latency requirements, limited power budgets, and safety concerns. By formulating robotic components as a system of decentralised agents, this work presents a decentralised multiagent reinforcement learning framework for continuous control. To this end, we first develop a cooperative multiagent PPO framework that allows for centralised optimisation during training and decentralised operation during execution. However, the system receives only a global reward signal that is not attributed to individual agents. To address this challenge, we further propose a generic game-theoretic credit assignment framework which computes agent-specific reward signals. Last but not least, we also incorporate a model-based RL module into our credit assignment framework, which leads to a significant improvement in sample efficiency. We demonstrate the effectiveness of our framework through experiments on MuJoCo locomotion control tasks. For a demo video, please visit: https://youtu.be/gFyVPm4svEY.
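To make the centralised-training / decentralised-execution idea concrete, the following is a minimal illustrative sketch (not the paper's implementation): each robotic component is an agent with its own policy acting on local observations only, while a shared critic observes the concatenated global state during training. All names and dimensions (AgentPolicy, CentralCritic, obs_dim, act_dim, n_agents) are hypothetical, and PyTorch is assumed.

```python
import torch
import torch.nn as nn

class AgentPolicy(nn.Module):
    """Decentralised actor: acts from its own local observation only."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                 nn.Linear(64, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, local_obs):
        mean = self.net(local_obs)
        return torch.distributions.Normal(mean, self.log_std.exp())

class CentralCritic(nn.Module):
    """Centralised value function: used only during training."""
    def __init__(self, global_state_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(global_state_dim, 64), nn.Tanh(),
                                 nn.Linear(64, 1))

    def forward(self, global_state):
        return self.net(global_state)

# Hypothetical sizes for illustration only.
n_agents, obs_dim, act_dim = 2, 8, 3
policies = [AgentPolicy(obs_dim, act_dim) for _ in range(n_agents)]
critic = CentralCritic(n_agents * obs_dim)

# Decentralised execution: each agent samples its action independently,
# without communicating with the other agents.
local_obs = [torch.randn(obs_dim) for _ in range(n_agents)]
actions = [policies[i](local_obs[i]).sample() for i in range(n_agents)]

# Centralised training signal: the critic scores the global (concatenated) state.
value = critic(torch.cat(local_obs))
```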