Autonomous vehicles (AVs) must be both safe and trustworthy to gain social acceptance and become a viable option for everyday public transportation. Explanations of system behaviour can increase safety and trust in AVs. Unfortunately, explaining the behaviour of AI-based driving functions is particularly challenging, as their decision-making processes are often opaque. The field of Explainability Engineering tackles this challenge by developing explanation models at design time. These models are derived from system design artefacts and stakeholder needs in order to produce correct and good explanations. To support this field, we propose an approach that enables context-aware, ante-hoc explanations of (un)expectable driving manoeuvres at runtime. The visual yet formal language Traffic Sequence Charts is used to formalise explanation contexts as well as the corresponding (un)expectable driving manoeuvres. A dedicated runtime monitor recognises these contexts and presents the associated explanations ante hoc, before the manoeuvre is executed. In combination, these elements aim to bridge correct and good explanations. Our method is demonstrated in a simulated overtaking scenario.
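To make the monitoring idea concrete, the following Python sketch illustrates the general pattern under strong simplifying assumptions: explanation contexts, which the paper formalises as Traffic Sequence Charts, are approximated here as plain predicates over a traffic snapshot, and a monitor presents the bound explanation before the manoeuvre starts. All names (TrafficSnapshot, ExplanationContext, ExplanationMonitor) and thresholds are hypothetical illustrations, not the paper's artefacts.

```python
# Hypothetical sketch of context-aware, ante-hoc explanation monitoring.
# Contexts are simplified to predicates; the paper uses Traffic Sequence Charts.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class TrafficSnapshot:
    """Illustrative world state; field names are assumptions."""
    ego_speed: float          # ego velocity in m/s
    lead_speed: float         # velocity of the vehicle ahead in m/s
    gap_to_lead: float        # distance to the vehicle ahead in m
    oncoming_lane_free: bool  # whether the opposite lane is usable


@dataclass
class ExplanationContext:
    """One formalised context plus the explanation shown when it holds."""
    name: str
    holds: Callable[[TrafficSnapshot], bool]
    explanation: str          # text presented ante hoc to the passenger


class ExplanationMonitor:
    """Checks all contexts against each snapshot; first match wins."""
    def __init__(self, contexts: list[ExplanationContext]) -> None:
        self.contexts = contexts

    def step(self, snapshot: TrafficSnapshot) -> Optional[str]:
        for ctx in self.contexts:
            if ctx.holds(snapshot):
                return f"[{ctx.name}] {ctx.explanation}"
        return None


# Example context loosely modelled on the paper's demonstration scenario:
# overtaking a slower lead vehicle. Thresholds are made up for illustration.
overtake_ctx = ExplanationContext(
    name="overtaking",
    holds=lambda s: (s.ego_speed > s.lead_speed
                     and s.gap_to_lead < 30.0
                     and s.oncoming_lane_free),
    explanation="A slower vehicle is ahead and the opposite lane is free; "
                "the vehicle will overtake.",
)

monitor = ExplanationMonitor([overtake_ctx])
snap = TrafficSnapshot(ego_speed=25.0, lead_speed=15.0,
                       gap_to_lead=20.0, oncoming_lane_free=True)
msg = monitor.step(snap)
if msg is not None:
    print(msg)  # presented before the manoeuvre starts (ante hoc)
```

In the actual approach, the predicate role is played by monitoring TSC-formalised contexts over time, which allows the same specification artefact to serve both the design-time explanation model and the runtime recognition step.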