A fundamental challenge in multiagent reinforcement learning is to learn beneficial behaviors in a shared environment with other simultaneously learning agents. In particular, each agent perceives the environment as effectively non-stationary due to the changing policies of other agents. Moreover, each agent is itself constantly learning, leading to natural non-stationarity in the distribution of experiences encountered. In this paper, we propose a novel meta-multiagent policy gradient theorem that directly accounts for the non-stationary policy dynamics inherent to multiagent learning settings. This is achieved by modeling our gradient updates to consider both an agent's own non-stationary policy dynamics and the non-stationary policy dynamics of other agents in the environment. We show that our theoretically grounded approach provides a general solution to the multiagent learning problem, which inherently comprises all key aspects of previous state-of-the-art approaches on this topic. We test our method on a diverse suite of multiagent benchmarks and demonstrate that, across the full spectrum of mixed-incentive, competitive, and cooperative domains, our method adapts to new agents as they learn more efficiently than baseline methods.
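To make the idea of accounting for both learning paths concrete, the display below gives a schematic sketch, not the paper's exact theorem: the notation $\theta^i_t$ (agent $i$'s policy parameters at learning step $t$), $\theta^{-i}_t$ (the other agents' parameters), $\tau_t$ (trajectories collected at step $t$), and $R^i$ (agent $i$'s return) is introduced here purely for illustration. The point is that, because the other agents' next parameters are computed from experience that agent $i$'s current policy helped generate, the gradient of agent $i$'s long-term objective picks up terms that differentiate through both its own update and its peers' updates:

\[
\nabla_{\theta^i_t} J^i \;\approx\;
\underbrace{\nabla_{\theta^i_t}\,\mathbb{E}\!\left[R^i(\tau_t)\right]}_{\text{current interaction}}
\;+\;
\underbrace{\Big(\tfrac{\partial \theta^i_{t+1}}{\partial \theta^i_t}\Big)^{\!\top}\nabla_{\theta^i_{t+1}}\,\mathbb{E}\!\left[R^i(\tau_{t+1})\right]}_{\text{own learning dynamics}}
\;+\;
\underbrace{\Big(\tfrac{\partial \theta^{-i}_{t+1}}{\partial \theta^i_t}\Big)^{\!\top}\nabla_{\theta^{-i}_{t+1}}\,\mathbb{E}\!\left[R^i(\tau_{t+1})\right]}_{\text{peers' learning dynamics}}
\]

Here $\theta^i_{t+1}$ and $\theta^{-i}_{t+1}$ denote the parameters obtained after one learning update computed from $\tau_t$; how each term is estimated in practice is left to the body of the paper.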