Many recent breakthroughs in multi-agent reinforcement learning (MARL) require the use of deep neural networks, which are challenging for human experts to interpret and understand. On the other hand, existing work on interpretable reinforcement learning (RL) has shown promise in extracting more interpretable decision tree-based policies from neural networks, but only in the single-agent setting. To fill this gap, we propose the first set of algorithms that extract interpretable decision-tree policies from neural networks trained with MARL. The first algorithm, IVIPER, extends VIPER, a recent method for single-agent interpretable RL, to the multi-agent setting. We demonstrate that IVIPER learns high-quality decision-tree policies for each agent. To better capture coordination between agents, we propose a novel centralized decision-tree training algorithm, MAVIPER. MAVIPER jointly grows the trees of each agent by predicting the behavior of the other agents using their anticipated trees, and uses a resampling scheme to focus on states that are critical for each agent's interactions with the other agents. We show that both algorithms generally outperform the baselines and that MAVIPER-trained agents achieve better-coordinated performance than IVIPER-trained agents on three different multi-agent particle-world environments.
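To make the joint tree-growing idea concrete, the sketch below shows a DAgger-style distillation loop in the spirit of MAVIPER, not the authors' implementation: a toy two-agent expert, sklearn decision trees, a criticality weight based on the expert's Q-value gap, and an extra up-weighting of states where the other agents' anticipated trees reproduce the expert's behavior. All names and the specific weighting rule are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's actual algorithm): a DAgger-style
# joint distillation loop in the spirit of MAVIPER, assuming sklearn trees,
# a toy 2-agent expert, and a simple Q-gap-based criticality weight.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
N_AGENTS, OBS_DIM, N_ACTIONS, DEPTH = 2, 4, 3, 3

def expert_q(i, obs):
    """Toy stand-in for agent i's trained critic: Q-values per action."""
    w = np.sin(np.arange(1, N_ACTIONS + 1) * (i + 1))
    return obs.sum() * w  # shape (N_ACTIONS,)

def expert_action(i, obs):
    return int(np.argmax(expert_q(i, obs)))

# Aggregated datasets (DAgger-style): observations labeled by the experts.
datasets = [([], [], []) for _ in range(N_AGENTS)]  # (obs, action, weight)
trees = [DecisionTreeClassifier(max_depth=DEPTH) for _ in range(N_AGENTS)]

for it in range(5):
    obs_batch = rng.normal(size=(256, N_AGENTS, OBS_DIM))
    for i in range(N_AGENTS):
        for obs in obs_batch:
            a_star = expert_action(i, obs[i])
            q = expert_q(i, obs[i])
            # Criticality proxy: advantage of the expert action over the worst
            # action; MAVIPER additionally accounts for whether the other
            # agents' anticipated trees reproduce the expert's joint behavior,
            # approximated here by a fixed up-weighting factor.
            w = q[a_star] - q.min()
            if it > 0:
                others_ok = all(
                    trees[j].predict(obs[j].reshape(1, -1))[0]
                    == expert_action(j, obs[j])
                    for j in range(N_AGENTS) if j != i
                )
                w *= 2.0 if others_ok else 1.0
            datasets[i][0].append(obs[i])
            datasets[i][1].append(a_star)
            datasets[i][2].append(w)
    # Refit each agent's tree on the weighted, aggregated data.
    for i in range(N_AGENTS):
        X, y, sw = map(np.asarray, datasets[i])
        trees[i].fit(X, y, sample_weight=sw)

print([t.get_depth() for t in trees])
```

In this sketch, the weighting plays the role of the resampling step: states where the interaction between agents matters (here, crudely proxied by the Q-gap and the other trees' agreement with their experts) contribute more to each tree's training signal.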