In multi-agent reinforcement learning, the inherent non-stationarity of the environment caused by other agents' actions poses significant difficulties for an agent trying to learn a good policy independently. One way to deal with this non-stationarity is opponent modeling, in which the agent explicitly accounts for the influence of other agents' policies. Most existing work relies on predicting other agents' actions or goals, or on discriminating between different policies. However, such modeling fails to capture the similarities and differences between policies simultaneously and thus cannot provide enough useful information when generalizing to unseen agents. To address this, we propose a general method to learn representations of other agents' policies, such that the distance between representations deliberately reflects the distance between policies, where the policy distance is inferred from joint action distributions sampled during training. We empirically show that an agent conditioned on the learned policy representation generalizes well to unseen agents in three multi-agent tasks.
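To make the idea concrete, the following is a minimal sketch, not the paper's implementation, of a distance-matching objective: an encoder maps an agent's sampled interaction data to a policy embedding, and the embedding distance is regressed toward a policy distance estimated from empirical joint action distributions. The names (`PolicyEncoder`, `js_distance`, `representation_loss`) and the choice of Jensen-Shannon distance as the policy distance are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyEncoder(nn.Module):
    """Maps sampled (observation, one-hot action) pairs of an agent to a fixed-size policy embedding."""
    def __init__(self, obs_dim, act_dim, embed_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, obs, act_onehot):
        # Mean-pool per-step features into a single policy representation.
        return self.net(torch.cat([obs, act_onehot], dim=-1)).mean(dim=0)

def js_distance(p, q, eps=1e-8):
    """Jensen-Shannon distance between two empirical (joint) action distributions (assumed metric)."""
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * (a.add(eps).log() - b.add(eps).log())).sum()
    return (0.5 * kl(p, m) + 0.5 * kl(q, m)).sqrt()

def representation_loss(encoder, batch_i, batch_j, dist_ij):
    """Push the distance between two policy embeddings toward the estimated policy distance."""
    z_i = encoder(*batch_i)  # batch_i = (obs, act_onehot) sampled under policy i
    z_j = encoder(*batch_j)
    return F.mse_loss(torch.norm(z_i - z_j), dist_ij)
```

In this sketch, `dist_ij` would be computed once per pair of policies via `js_distance` on their sampled joint action distributions, and the controlled agent would then condition its own policy network on the resulting embedding.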