There has been growing interest in the development and deployment of autonomous vehicles on public roads over the last few years, encouraged by the empirical successes of powerful artificial intelligence (AI) techniques, especially in deep learning and reinforcement learning. However, recent traffic accidents involving autonomous cars have hindered the broader public acceptance of this technology. As AI is the main driving force behind the intelligent navigation systems of self-driving vehicles, both stakeholders and transportation jurisdictions require the AI-driven software architecture of these vehicles to be safe, explainable, and compliant with regulations. To address this issue, we present a framework that integrates autonomous control, explainable AI architecture, and regulatory compliance. Moreover, we provide several conceptual models from this perspective to help guide future research directions.