Autonomous driving has achieved significant milestones in research and development over the last decade. Interest in the field continues to grow, as the deployment of self-operating vehicles promises safer and more ecologically friendly transportation systems. With the rise of computationally powerful artificial intelligence (AI) techniques, autonomous vehicles can sense their environment with high precision, make safe real-time decisions, and operate reliably without human intervention. However, intelligent decision-making in autonomous cars is not generally understandable by humans in the current state of the art, and this deficiency hinders the technology from becoming socially acceptable. Hence, aside from making safe real-time decisions, the AI systems of autonomous vehicles also need to explain how their decisions are constructed in order to comply with regulations across many jurisdictions. Our study sheds comprehensive light on the development of explainable artificial intelligence (XAI) approaches for autonomous vehicles. In particular, we make the following contributions. First, we provide a thorough overview of the state-of-the-art studies on XAI for autonomous driving. We then propose an XAI framework that considers the societal and legal requirements for the explainability of autonomous driving systems. Finally, as future research directions, we describe several XAI approaches that can improve operational safety and transparency, supporting public approval of autonomous driving technology by regulators and engaged stakeholders.