Safety assurance is a central concern for the development and societal acceptance of automated driving (AD) systems. Perception is a key aspect of AD that relies heavily on Machine Learning (ML). Despite the known challenges with the safety assurance of ML-based components, proposals have recently emerged for unit-level safety cases addressing these components. Unfortunately, AD safety cases express safety requirements at the system level, and these efforts are missing the critical linking argument that connects system-level safety requirements to unit-level component performance requirements. In this paper, we propose a generic template for such a linking argument specifically tailored for perception components. The template takes a deductive and formal approach to defining strong traceability between levels. We demonstrate the applicability of the template with a detailed case study and discuss its use as a tool to support the incremental development of perception components.