This paper presents a novel approach to Multi-Agent Reinforcement Learning (MARL) that combines cooperative task decomposition with the learning of reward machines (RMs) encoding the structure of the sub-tasks. The proposed method addresses the non-Markovian nature of the rewards in partially observable environments and improves the interpretability of the learnt policies required to complete the cooperative task. The RMs associated with each sub-task are learnt in a decentralised manner and then used to guide the behaviour of each agent, which reduces the complexity of the cooperative multi-agent problem and allows for more effective learning. The results suggest that this approach is a promising direction for future research in MARL, especially in complex environments with large state spaces and multiple agents.
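To make the role of a reward machine concrete, the following is a minimal sketch of an RM as a finite-state machine over high-level events for a single agent's sub-task. All names and the example sub-task are illustrative assumptions, not taken from the paper, and the paper learns such RMs from experience rather than specifying them by hand as done here.

```python
# Minimal sketch of a reward machine (RM): a finite-state machine that
# issues rewards on high-level events. Names are illustrative; the paper's
# RMs are learnt in a decentralised manner, not hand-specified as here.

from dataclasses import dataclass, field
from typing import Dict, Tuple, FrozenSet


@dataclass
class RewardMachine:
    """Finite-state machine exposing sub-task progress and rewards."""
    initial_state: int
    # (rm_state, event) -> (next_rm_state, reward)
    transitions: Dict[Tuple[int, str], Tuple[int, float]]
    terminal_states: FrozenSet[int] = field(default_factory=frozenset)

    def step(self, rm_state: int, event: str) -> Tuple[int, float]:
        """Advance on an observed event; unknown events self-loop with 0 reward."""
        return self.transitions.get((rm_state, event), (rm_state, 0.0))

    def is_terminal(self, rm_state: int) -> bool:
        return rm_state in self.terminal_states


# Hypothetical sub-task: "press the button, then reach the goal".
# The RM state records which stage has been completed, so the otherwise
# non-Markovian reward becomes Markovian in (observation, rm_state).
rm = RewardMachine(
    initial_state=0,
    transitions={
        (0, "button_pressed"): (1, 0.0),  # first milestone, no reward yet
        (1, "goal_reached"): (2, 1.0),    # sub-task completed
    },
    terminal_states=frozenset({2}),
)

if __name__ == "__main__":
    state = rm.initial_state
    for event in ["goal_reached", "button_pressed", "goal_reached"]:
        state, reward = rm.step(state, event)
        print(f"event={event!r:18} rm_state={state} reward={reward}")
    # Each agent would condition its policy or Q-function on
    # (observation, rm_state), so standard RL applies to its sub-task.
```

In such a setup, each agent tracks its own RM state alongside its local observation, which is one way the decomposition keeps per-agent learning tractable while the RM structure remains inspectable.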