Deep learning methods can be used to produce control policies, but certifying their safety is challenging. The resulting networks are nonlinear and often very large. In response to this challenge, we present OVERT: a sound algorithm for the safety verification of nonlinear discrete-time closed-loop dynamical systems with neural network control policies. The novelty of OVERT lies in combining ideas from the classical formal methods literature with ideas from the newer neural network verification literature. The central concept of OVERT is to abstract nonlinear functions with a set of optimally tight piecewise linear bounds. Such piecewise linear bounds are designed for seamless integration into ReLU neural network verification tools. OVERT can be used to prove bounded-time safety properties by either computing reachable sets or solving feasibility queries directly. We demonstrate safety verification on several classical benchmark examples. OVERT compares favorably to existing methods both in computation time and in tightness of the reachable set.
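The idea of sound piecewise linear bounds can be illustrated with a minimal sketch. The snippet below bounds a concave function (here sin on [0, π]) from below by chords between knots and from above by tangent lines at the knots; the knot placement is an arbitrary illustrative choice, not OVERT's optimal construction, and the function names are hypothetical:

```python
import numpy as np

def pwl_bounds_concave(f, df, knots):
    """Sound piecewise-linear bounds for a concave f on [knots[0], knots[-1]].

    For concave f, chords between knots lie below f, and tangent lines at
    the knots lie above it, so the pair (lower, upper) is a sound abstraction.
    """
    ys = f(knots)
    slopes = df(knots)

    def lower(x):
        # Chord interpolation: exact at the knots, under-approximates between them.
        return np.interp(x, knots, ys)

    def upper(x):
        # Pointwise minimum over all tangent lines f(k) + f'(k) * (x - k).
        x = np.atleast_1d(x)
        lines = ys[:, None] + slopes[:, None] * (x[None, :] - knots[:, None])
        return lines.min(axis=0)

    return lower, upper

# sin is concave on [0, pi]; five evenly spaced knots (illustrative, not optimal).
knots = np.linspace(0.0, np.pi, 5)
lower, upper = pwl_bounds_concave(np.sin, np.cos, knots)

# Sanity check: the bounds sandwich sin everywhere on a fine grid.
xs = np.linspace(0.0, np.pi, 1000)
assert np.all(lower(xs) <= np.sin(xs) + 1e-9)
assert np.all(np.sin(xs) <= upper(xs) + 1e-9)
```

Because each bound is a pointwise max or min of affine pieces, it can be expressed exactly with ReLU units, which is what makes such abstractions compatible with ReLU network verification tools.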