Deep reinforcement learning (RL) has recently achieved many breakthroughs on a range of complex control tasks. However, the agent's decision-making process is generally not transparent, and this lack of interpretability hinders the adoption of RL in safety-critical scenarios. While several methods have attempted to interpret vision-based RL, most provide no detailed explanation of the agent's behavior. In this paper, we propose a self-supervised interpretable framework that discovers interpretable features, making RL agents easy to understand even for non-experts. Specifically, a self-supervised interpretable network (SSINet) produces fine-grained attention masks that highlight task-relevant information, which constitutes most of the evidence for the agent's decisions. We verify and evaluate our method on several Atari 2600 games as well as Duckietown, a challenging self-driving car simulator. The results show that our method yields empirical evidence about how the agent makes decisions and why it performs well or badly, especially when transferred to novel scenes. Overall, our method provides valuable insight into the internal decision-making process of vision-based RL. In addition, our method requires no external labelled data, demonstrating that high-quality masks can be learned in a self-supervised manner, which may shed light on new paradigms for label-free vision learning such as self-supervised segmentation and detection.
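To make the core idea concrete, the following is a minimal, hypothetical PyTorch sketch of how an attention-mask network could be trained with the agent's own behavior as the only supervision. The names (MaskNet, self_supervised_loss), layer sizes, and the behavior-matching plus sparsity objective are illustrative assumptions for 84x84 Atari-style observations, not the paper's exact SSINet architecture or loss.

```python
# Hypothetical sketch: a mask network trained self-supervised against a frozen policy.
# Layer sizes, names, and loss terms are assumptions, not the paper's exact SSINet.
import torch
import torch.nn as nn

class MaskNet(nn.Module):
    """Encoder-decoder that outputs a per-pixel attention mask in [0, 1]."""
    def __init__(self, in_channels=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 8, stride=4), nn.Sigmoid(),
        )

    def forward(self, obs):
        # obs: (B, C, 84, 84) -> mask: (B, 1, 84, 84)
        return self.decoder(self.encoder(obs))

def self_supervised_loss(policy, mask_net, obs, sparsity_weight=0.01):
    """Train the mask so that the masked observation reproduces the frozen
    agent's own action logits (self-supervision), while keeping the mask sparse."""
    with torch.no_grad():
        target_logits = policy(obs)            # teacher signal: the agent itself
    mask = mask_net(obs)
    masked_obs = obs * mask                    # keep only the attended pixels
    pred_logits = policy(masked_obs)
    behaviour_loss = nn.functional.mse_loss(pred_logits, target_logits)
    sparsity_loss = mask.mean()                # encourage small, focused masks
    return behaviour_loss + sparsity_weight * sparsity_loss
```

In this sketch, the pretrained policy is kept frozen and serves both as the source of observations (via rollouts) and as the training target, so no external labels are needed; only the mask network's parameters are updated.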