Achieving and maintaining cooperation between agents to accomplish a common objective is one of the central goals of Multi-Agent Reinforcement Learning (MARL). Nevertheless, in many real-world scenarios, separately trained and specialized agents are deployed into a shared environment, or the environment requires multiple objectives to be achieved by different coexisting parties. These variations among specialties and objectives are likely to cause mixed motives that eventually result in a social dilemma in which all parties lose. To resolve this issue, we propose the Incentive Q-Flow (IQ-Flow) algorithm, which modifies the system's reward setup with an incentive regulator agent such that the cooperative policy also corresponds to the self-interested policy for the agents. Unlike existing methods that learn to incentivize self-interested agents, IQ-Flow makes no assumptions about the agents' policies or learning algorithms, which allows the framework to generalize to a wider array of applications. IQ-Flow performs an offline evaluation of the optimality of the learned policies, using the data provided by other agents, to determine the cooperative and self-interested policies. Next, IQ-Flow uses meta-gradient learning to estimate how the policy evaluation changes with the given incentives, and it modifies the incentives such that the greedy policies for the cooperative and self-interested objectives yield the same actions. We present the operational characteristics of IQ-Flow in Iterated Matrix Games. We demonstrate that IQ-Flow outperforms the state-of-the-art incentive-design algorithm in the Escape Room and 2-Player Cleanup environments. We further demonstrate that the pretrained IQ-Flow mechanism significantly outperforms the shared-reward setup in the 2-Player Cleanup environment.
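To make the incentive-alignment idea concrete, the following is a minimal, hypothetical sketch of the meta-gradient step described above, assuming tabular Q-estimates and a differentiable incentive table in PyTorch. All names here (`IncentiveNet`, `inner_update`, the cross-entropy alignment surrogate, and the synthetic transition batch) are illustrative assumptions, not the authors' implementation: the incentive parameters are updated through the gradient of a differentiable inner Q-update so that the greedy self-interested policy matches the greedy cooperative one.

```python
# Hypothetical sketch of meta-gradient incentive alignment (not the paper's code).
import torch
import torch.nn as nn

n_states, n_actions = 4, 2

class IncentiveNet(nn.Module):
    """Maps a (state, action) pair to an additional incentive reward."""
    def __init__(self):
        super().__init__()
        self.table = nn.Parameter(torch.zeros(n_states, n_actions))

    def forward(self, s, a):
        return self.table[s, a]

incentives = IncentiveNet()
opt = torch.optim.Adam(incentives.parameters(), lr=1e-2)

# Tabular Q-estimates for the self-interested and cooperative objectives,
# assumed to have been evaluated offline from logged agent data.
q_self = torch.randn(n_states, n_actions)
q_coop = torch.randn(n_states, n_actions)

gamma, alpha = 0.95, 0.5

def inner_update(q, batch, eta):
    """One differentiable Q-learning step on the incentivized reward."""
    s, a, r, s_next = batch
    target = r + eta(s, a) + gamma * q[s_next].max(dim=-1).values
    q_new = q.clone()
    q_new[s, a] = (1 - alpha) * q[s, a] + alpha * target
    return q_new

# Synthetic batch of transitions standing in for other agents' data.
batch = (torch.tensor([0, 1, 2]), torch.tensor([0, 1, 0]),
         torch.tensor([0.0, 1.0, -1.0]), torch.tensor([1, 2, 3]))

for step in range(200):
    # Inner step, kept in the autograd graph so gradients reach the incentives.
    q_post = inner_update(q_self, batch, incentives)
    # Outer (meta) objective: make the greedy self-interested policy match
    # the greedy cooperative policy, via a soft cross-entropy surrogate.
    greedy_coop = q_coop.argmax(dim=-1)
    loss = nn.functional.cross_entropy(q_post, greedy_coop)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The cross-entropy surrogate is one smooth stand-in for the hard constraint that the two greedy policies agree; since the argmax itself is non-differentiable, some relaxation of this kind is needed for the outer gradient to exist.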