We study the problem of designing autonomous agents that can learn to cooperate effectively with a potentially suboptimal partner while having no access to the joint reward function. This problem is modelled as a cooperative episodic two-agent Markov decision process. We assume control over only the first of the two agents in a Stackelberg formulation of the game, where the second agent acts so as to maximise expected utility given the first agent's policy. How should the first agent act in order to learn the joint reward function as quickly as possible and so that the joint policy is as close to optimal as possible? We analyse how knowledge about the reward function can be gained in this interactive two-agent scenario. We show that when the learning agent's policies have a significant effect on the transition function, the reward function can be learned efficiently.
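The Stackelberg structure described above can be illustrated with a minimal sketch, under assumptions not taken from the paper: a one-step cooperative game in which the leader commits to a (possibly stochastic) policy and the follower then best-responds, maximising the expected joint reward given that commitment. The payoff matrix, action sets, and function names here are purely illustrative.

```python
# Hypothetical joint reward R[a_leader][a_follower]; both agents share it.
R = [
    [4.0, 0.0],   # leader action 0
    [1.0, 3.0],   # leader action 1
]

def follower_best_response(leader_policy):
    """Return the follower action maximising expected joint reward
    under the leader's committed mixed policy, plus the expectations."""
    n_follower = len(R[0])
    expected = [
        sum(p * R[a_l][a_f] for a_l, p in enumerate(leader_policy))
        for a_f in range(n_follower)
    ]
    best = max(range(n_follower), key=lambda a_f: expected[a_f])
    return best, expected

# The leader commits to playing action 0 with probability 0.9.
leader_policy = [0.9, 0.1]
best_a, expected = follower_best_response(leader_policy)
print(best_a, expected)  # best_a = 0; expected = [3.7, 0.3]
```

Because the follower's choice depends on the leader's committed policy, the leader's behaviour shapes which state-action pairs are visited; this is the mechanism by which the leader's policy can make the joint reward function easier or harder to identify.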