Machine Learning (ML) is now used in a range of systems with results that are reported to exceed, under certain conditions, human performance. Many of these systems, in domains such as healthcare, automotive, and manufacturing, exhibit high degrees of autonomy and are safety-critical. Establishing justified confidence in ML forms a core part of the safety case for these systems. In this document we introduce a methodology for the Assurance of Machine Learning for use in Autonomous Systems (AMLAS). AMLAS comprises a set of safety case patterns and a process for (1) systematically integrating safety assurance into the development of ML components and (2) generating the evidence base for explicitly justifying the acceptable safety of these components when integrated into autonomous system applications.