This paper presents a novel approach to Multi-Agent Reinforcement Learning (MARL) that combines cooperative task decomposition with the learning of reward machines (RMs) encoding the structure of the sub-tasks. The proposed method addresses the non-Markovian nature of rewards in partially observable environments and improves the interpretability of the policies learnt to complete the cooperative task. The RM associated with each sub-task is learnt in a decentralised manner and then used to guide the behaviour of the corresponding agent. This decomposition reduces the complexity of the cooperative multi-agent problem, enabling more effective learning. The results suggest that our approach is a promising direction for future research in MARL, especially in complex environments with large state spaces and multiple agents.
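To make the central object concrete, the following is a minimal sketch of a reward machine in the standard formulation: a finite-state machine whose transitions are triggered by high-level propositional events and emit scalar rewards. The class, state names, and the two-step sub-task are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class RewardMachine:
    """Finite-state machine over high-level events; each transition emits a reward."""
    initial_state: str
    # transitions[(state, event)] -> (next_state, reward)
    transitions: dict = field(default_factory=dict)
    terminal_states: set = field(default_factory=set)

    def step(self, state, event):
        # Unmatched events self-loop with zero reward.
        return self.transitions.get((state, event), (state, 0.0))

# Hypothetical two-step sub-task for one agent: press a button, then reach the goal.
rm = RewardMachine(
    initial_state="u0",
    transitions={
        ("u0", "button_pressed"): ("u1", 0.0),
        ("u1", "goal_reached"): ("u_acc", 1.0),
    },
    terminal_states={"u_acc"},
)

state = rm.initial_state
for event in ["button_pressed", "goal_reached"]:
    state, reward = rm.step(state, event)
    print(state, reward)  # u1 0.0, then u_acc 1.0
```

In this view, each agent tracks its RM state alongside its local observation, which restores the Markov property for its sub-task and makes the learnt behaviour inspectable through the RM's structure.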