Autonomous vehicles must be able to infer the goals of other vehicles (goal recognition) in order to interact with them safely and to predict their future trajectories. This is a difficult problem, especially in urban environments with interactions between many vehicles. Goal recognition methods must run fast enough for real-time use while making accurate inferences. As autonomous driving is safety-critical, methods should also be human-interpretable and amenable to formal safety verification. Existing goal recognition methods for autonomous vehicles fail to satisfy all four of these objectives: speed, accuracy, interpretability, and verifiability. We propose Goal Recognition with Interpretable Trees (GRIT), a goal recognition system which achieves all four objectives. GRIT makes use of decision trees trained on vehicle trajectory data. We evaluate GRIT on two datasets, showing that GRIT achieves fast inference and accuracy comparable to two deep learning baselines, a planning-based goal recognition method, and an ablation of GRIT. We show that the learned trees are human-interpretable and demonstrate how properties of GRIT can be formally verified using a satisfiability modulo theories (SMT) solver.
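To make the idea of goal recognition with interpretable trees concrete, the following is a minimal illustrative sketch, not GRIT's actual trained trees: one hand-written decision tree per goal maps simple trajectory features of another vehicle to a goal likelihood, and normalising across goals gives a posterior. The feature names (`in_left_lane`, `speed`, `angle_to_goal`), thresholds, and goal labels are invented for illustration.

```python
# Illustrative sketch of tree-based goal recognition (assumed feature
# names and thresholds; not the trees learned by GRIT).

def turn_left_likelihood(features: dict) -> float:
    """Walk an interpretable decision tree; each leaf holds a likelihood."""
    if features["in_left_lane"]:
        if features["speed"] < 8.0:      # slowing down before the junction
            return 0.9
        return 0.6
    if features["angle_to_goal"] < 0.3:  # already heading toward the exit
        return 0.5
    return 0.1

def goal_posterior(features: dict, trees: dict) -> dict:
    """One tree per candidate goal; normalise likelihoods to a posterior."""
    scores = {goal: tree(features) for goal, tree in trees.items()}
    total = sum(scores.values())
    return {goal: s / total for goal, s in scores.items()}

trees = {
    "turn-left": turn_left_likelihood,
    "straight-on": lambda f: 1.0 - turn_left_likelihood(f),
}
obs = {"in_left_lane": True, "speed": 5.0, "angle_to_goal": 1.0}
posterior = goal_posterior(obs, trees)
```

Because each inference is a short sequence of threshold comparisons, the tree can be read by a human and its branch conditions can be encoded as SMT constraints for formal verification.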