There has been growing interest in the development and deployment of autonomous vehicles on roads over the last few years, encouraged by the empirical successes of powerful artificial intelligence (AI) techniques, especially in the applications of deep learning and reinforcement learning. However, as demonstrated by recent traffic accidents, autonomous driving technology is not yet mature enough for safe deployment. Because AI is the main technology behind the intelligent navigation systems of self-driving vehicles, both stakeholders and transportation jurisdictions require the AI-driven software architecture to be safe, explainable, and compliant with regulations. We propose a framework that integrates autonomous control, explainable AI, and regulatory compliance to address this issue, and we validate the framework with a critical analysis in a case study. Moreover, we describe relevant XAI approaches that can help achieve the goals of the framework.