Safety assurance is a central concern for the development and societal acceptance of automated driving (AD) systems. Perception is a key aspect of AD that relies heavily on Machine Learning (ML). Despite the known challenges with the safety assurance of ML-based components, proposals have recently emerged for unit-level safety cases addressing these components. Unfortunately, AD safety cases express safety requirements at the system level, and these efforts are missing the critical linking argument needed to integrate system-level safety requirements with unit-level component performance requirements. In this paper, we propose the Integration Safety Case for Perception (ISCaP), a generic template for such a linking safety argument specifically tailored for perception components. The template takes a deductive and formal approach to define strong traceability between levels. We demonstrate the applicability of ISCaP with a detailed case study and discuss its use as a tool to support the incremental development of perception components.