In multi-agent reinforcement learning, the inherent non-stationarity of the environment caused by other agents' actions poses significant difficulties for an agent to learn a good policy independently. One way to deal with non-stationarity is agent modeling, in which an agent takes the influence of other agents' policies into account. Most existing work relies on predicting other agents' actions or goals, or on discriminating between their policies. However, such modeling fails to capture the similarities and differences between policies simultaneously and thus cannot provide useful information when generalizing to unseen policies. To address this, we propose a general method that learns representations of other agents' policies from the joint-action distributions sampled during interaction. The similarities and differences between policies are naturally captured by the policy distance inferred from the joint-action distributions and are explicitly reflected in the learned representations. Agents conditioned on these policy representations generalize well to unseen agents. We empirically demonstrate that our method outperforms existing approaches on multi-agent tasks when facing unseen agents.
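To make the distribution-based policy distance concrete, the following is a minimal sketch under stated assumptions, not the paper's actual implementation: joint actions observed while interacting with an agent are binned into an empirical joint-action distribution, and two policies are compared via a divergence between those distributions. The Jensen-Shannon distance and the helper names `empirical_joint_action_dist` and `policy_distance` are illustrative choices introduced here, not taken from the paper.

```python
import numpy as np

def empirical_joint_action_dist(joint_actions, num_agents, num_actions):
    """Estimate an empirical joint-action distribution from interaction samples.

    joint_actions: int array of shape (T, num_agents) with discrete action indices.
    Returns a flat probability vector over the joint-action space.
    """
    # Map each sampled joint action to a single index in the joint-action space.
    idx = np.ravel_multi_index(joint_actions.T, (num_actions,) * num_agents)
    counts = np.bincount(idx, minlength=num_actions ** num_agents).astype(float)
    return counts / counts.sum()

def policy_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance between two joint-action distributions
    (an assumed choice of divergence, used only for illustration)."""
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log((a[mask] + eps) / (b[mask] + eps)))

    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

# Example: distances between three hypothetical teammate policies in a
# 2-agent, 3-action game, estimated from 500 sampled joint actions each.
rng = np.random.default_rng(0)
uniform = rng.integers(0, 3, size=(500, 2))   # near-uniform behavior
biased = rng.integers(0, 2, size=(500, 2))    # never plays action 2
fixed = np.zeros((500, 2), dtype=int)         # always plays action 0

d_ub = policy_distance(empirical_joint_action_dist(uniform, 2, 3),
                       empirical_joint_action_dist(biased, 2, 3))
d_uf = policy_distance(empirical_joint_action_dist(uniform, 2, 3),
                       empirical_joint_action_dist(fixed, 2, 3))
print(f"uniform vs biased: {d_ub:.3f}, uniform vs fixed: {d_uf:.3f}")
```

In the paper's setting, such pairwise distances would then guide a representation network so that distances between learned policy embeddings reflect the inferred policy distances; the encoder and training loss are not shown in this sketch.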