As neural networks (NNs) become more prevalent in safety-critical applications such as vehicle control, there is a growing need to certify that systems with NN components are safe. This paper presents a set of backward reachability approaches for safety certification of neural feedback loops (NFLs), i.e., closed-loop systems with NN control policies. While backward reachability strategies have been developed for systems without NN components, the nonlinearities in NN activation functions and the general noninvertibility of NN weight matrices make backward reachability for NFLs a challenging problem. To avoid the difficulties associated with propagating sets backward through NNs, we introduce a framework that leverages standard forward NN analysis tools to efficiently find over-approximations of backprojection (BP) sets, i.e., sets of states from which an NN policy will lead a system to a given target set. We present frameworks for calculating BP over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs, and propose computationally efficient strategies. We use numerical results from a variety of models to showcase the proposed algorithms, including a demonstration of safety certification for a 6D system.