Safety-critical autonomous systems require trustworthy and transparent decision-making processes to be deployable in the real world. Advances in Machine Learning deliver high performance, but largely through black-box algorithms. We focus our discussion of explainability specifically on Autonomous Vehicles (AVs). As safety-critical systems, AVs present a unique opportunity to leverage cutting-edge Machine Learning techniques while requiring transparency in decision making. Interpretability of every action an AV takes becomes crucial in post-hoc analysis, where blame assignment may be necessary. In this paper, we take a position on how researchers could incorporate explainability and interpretability into the design and optimization of individual Autonomous Vehicle modules, including Perception, Planning, and Control.