With the increasing use of Machine Learning (ML) in critical autonomous systems, runtime monitors have been developed to detect prediction errors and keep the system in a safe state during operations. Monitors have been proposed for different applications involving diverse perception tasks and ML models, and context-specific evaluation procedures and metrics are used in each setting. This paper introduces three unified safety-oriented metrics, representing the safety benefits of the monitor (Safety Gain), the remaining safety gaps after using it (Residual Hazard), and its negative impact on the system's performance (Availability Cost). Computing these metrics requires defining two return functions, representing how a given ML prediction will impact expected future rewards and hazards. Three use cases (classification, drone landing, and autonomous driving) are used to demonstrate how metrics from the literature can be expressed in terms of the proposed metrics. Experimental results on these examples show how different evaluation choices impact the perceived performance of a monitor. Because our formalism requires explicit safety assumptions to be formulated, it allows us to ensure that the evaluation conducted matches the high-level system requirements.
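To make the computation concrete, the following minimal Python sketch shows one way the three metrics could be aggregated from two user-supplied return functions. The function names, signatures, and aggregation scheme are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
# Hypothetical sketch: aggregating Safety Gain, Residual Hazard, and
# Availability Cost from two assumed return functions:
#   reward_return(pred, label) -> expected future reward of acting on a prediction
#   hazard_return(pred, label) -> expected future hazard of acting on a prediction
# These names and the per-sample aggregation are assumptions, not the paper's definitions.

def evaluate_monitor(preds, labels, monitor_rejects, reward_return, hazard_return):
    """monitor_rejects[i] is True when the monitor discards prediction i,
    so the system falls back to a safe behaviour instead of using it."""
    hazard_no_monitor = 0.0    # hazard accumulated if every prediction were used
    hazard_with_monitor = 0.0  # hazard remaining on predictions the monitor accepts
    reward_lost = 0.0          # reward given up on predictions the monitor rejects

    for pred, label, rejected in zip(preds, labels, monitor_rejects):
        h = hazard_return(pred, label)
        r = reward_return(pred, label)
        hazard_no_monitor += h
        if rejected:
            reward_lost += r          # negative impact on availability/performance
        else:
            hazard_with_monitor += h  # hazard that slips past the monitor

    safety_gain = hazard_no_monitor - hazard_with_monitor
    residual_hazard = hazard_with_monitor
    availability_cost = reward_lost
    return safety_gain, residual_hazard, availability_cost
```

Under these assumptions, a perfect monitor would reject exactly the hazardous predictions, driving Residual Hazard to zero while keeping Availability Cost low; the three metrics then quantify how far a real monitor is from that ideal.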