Learning from datasets without interacting with the environment (offline learning) is an essential step toward applying Reinforcement Learning (RL) algorithms in real-world scenarios. However, compared with its single-agent counterpart, offline multi-agent RL involves more agents and a larger joint state-action space, which makes it more challenging yet has attracted little attention. We demonstrate that current offline RL algorithms are ineffective in multi-agent systems due to accumulated extrapolation error. In this paper, we propose a novel offline RL algorithm, named Implicit Constraint Q-learning (ICQ), which effectively alleviates extrapolation error by trusting only the state-action pairs given in the dataset for value estimation. Moreover, we extend ICQ to multi-agent tasks by decomposing the joint policy under the implicit constraint. Experimental results demonstrate that the extrapolation error is reduced to almost zero and is insensitive to the number of agents. We further show that ICQ achieves state-of-the-art performance on challenging multi-agent offline tasks (StarCraft II).
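To make the central idea concrete, the following is a minimal sketch of a value-update target that queries the Q-function only at state-action pairs drawn from the dataset, so no out-of-distribution actions are ever evaluated. This is not the authors' implementation: the SARSA-style target, the batch-softmax weighting, and names such as `q_target_net`, `beta`, and the `batch` keys are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def in_dataset_target(q_target_net, batch, gamma=0.99, beta=1.0):
    """Compute a SARSA-like target that evaluates Q only on
    (next_state, next_action) pairs present in the offline dataset.

    `batch` is assumed to hold tensors sampled from the dataset:
    "rewards", "next_states", "next_actions", "dones".
    """
    with torch.no_grad():
        # Q-values only at state-action pairs seen in the data;
        # no max over unseen actions, hence no extrapolation error.
        next_q = q_target_net(batch["next_states"], batch["next_actions"])
        # Softmax weighting over the sampled batch keeps the target
        # inside the data distribution (an implicit policy constraint).
        weights = F.softmax(next_q / beta, dim=0) * next_q.shape[0]
        target = batch["rewards"] + gamma * (1.0 - batch["dones"]) * weights * next_q
    return target
```

In contrast, a standard off-policy target of the form `max_a Q(s', a)` would query the Q-function on actions absent from the dataset, which is the source of the extrapolation error the abstract describes.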