This survey reviews explainability methods for vision-based self-driving systems trained with behavior cloning. The concept of explainability has several facets, and the need for explainability is strong in driving, a safety-critical application. Gathering contributions from several research fields, namely computer vision, deep learning, autonomous driving, and explainable AI (X-AI), this survey tackles several points. First, it discusses definitions, context, and motivation for gaining more interpretability and explainability from self-driving systems, as well as the challenges that are specific to this application. Second, methods that provide explanations for a black-box self-driving system in a post-hoc fashion are comprehensively organized and detailed. Third, approaches from the literature that aim at building more interpretable self-driving systems by design are presented and discussed in detail. Finally, remaining open challenges and potential future research directions are identified and examined.