This survey reviews explainability methods for vision-based self-driving systems. The concept of explainability has several facets, and the need for explainability is strong in driving, a safety-critical application. Gathering contributions from several research fields, namely computer vision, deep learning, autonomous driving, and explainable AI (X-AI), this survey tackles several points. First, it discusses definitions, context, and the motivation for gaining more interpretability and explainability from self-driving systems. Second, major recent state-of-the-art approaches to developing self-driving systems are briefly presented. Third, methods that provide post-hoc explanations for black-box self-driving systems are comprehensively organized and detailed. Fourth, approaches from the literature that aim to build self-driving systems that are more interpretable by design are presented and discussed in detail. Finally, remaining open challenges and potential future research directions are identified and examined.