Motion prediction systems aim to capture the future behavior of traffic scenarios, enabling autonomous vehicles to perform safe and efficient planning. The evolution of these scenarios is highly uncertain and depends on the interactions of agents with static and dynamic objects in the scene. GNN-based approaches have recently gained attention, as they are well suited to naturally modeling these interactions. However, one of the main challenges that remains unexplored is how to address the complexity and opacity of these models in order to meet the transparency requirements of autonomous driving systems, which include aspects such as interpretability and explainability. In this work, we aim to improve the explainability of motion prediction systems through several complementary approaches. First, we propose a new Explainable Heterogeneous Graph-based Policy (XHGP) model based on a heterograph representation of the traffic scene and lane-graph traversals, which learns interaction behaviors using object-level and type-level attention. This learned attention provides information about the most important agents and interactions in the scene. Second, we compare this attention-based analysis with the explanations provided by GNNExplainer. Third, we apply counterfactual reasoning to explain selected individual scenarios by probing the sensitivity of the trained model to changes in the input data, i.e., masking some elements of the scene, modifying trajectories, and adding or removing dynamic agents. The explainability analysis provided in this paper is a first step towards more transparent and reliable motion prediction systems, which is important from the perspective of users, developers, and regulatory agencies. The code to reproduce this work is publicly available at https://github.com/sancarlim/Explainable-MP/tree/v1.1.
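To make the counterfactual-reasoning idea concrete, below is a minimal sketch (not the authors' released code; see the repository above for that) of the agent-removal probe described in the abstract: mask one dynamic agent out of the scene, re-run a trained predictor, and measure how far the target agent's predicted trajectory shifts. The `model` interface, the tensor layout, and the `ToyModel` stand-in are illustrative assumptions, not the XHGP architecture itself.

```python
import torch

def counterfactual_agent_removal(model, agent_feats, agent_mask, target_idx):
    """Score each agent by how much removing it changes the target's prediction.

    agent_feats: (N, T, F) past trajectories of N agents (hypothetical layout)
    agent_mask:  (N,) boolean, True for agents visible to the model
    target_idx:  index of the agent whose predicted trajectory we inspect
    """
    with torch.no_grad():
        base_pred = model(agent_feats, agent_mask)[target_idx]  # (T_f, 2)
        sensitivities = {}
        for i in range(agent_feats.shape[0]):
            if i == target_idx or not agent_mask[i]:
                continue
            cf_mask = agent_mask.clone()
            cf_mask[i] = False  # counterfactual scene: agent i removed
            cf_pred = model(agent_feats, cf_mask)[target_idx]
            # Mean displacement between factual and counterfactual outputs
            sensitivities[i] = (base_pred - cf_pred).norm(dim=-1).mean().item()
    return sensitivities  # higher value -> agent i matters more to the target

# Toy usage with a stand-in model that mixes each agent with the scene context.
class ToyModel(torch.nn.Module):
    def forward(self, feats, mask):
        ctx = feats[mask].mean(dim=0, keepdim=True)  # (1, T, F) scene context
        return (feats + ctx)[..., :2]                # (N, T, 2) mock "future"

sens = counterfactual_agent_removal(
    ToyModel(), torch.randn(5, 10, 4),
    torch.ones(5, dtype=torch.bool), target_idx=0)
print(sens)
```

The same loop structure extends to the other perturbations mentioned above (modifying trajectories or adding agents): build a perturbed copy of the input, re-run the frozen model, and compare predictions against the factual baseline.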