We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics. Neural networks have been used extensively before as approximators; in this work, we take a step further and use them, for the first time, as abstractions. For a given dynamical model, our method synthesises a neural network that overapproximates its dynamics by ensuring an arbitrarily tight, formally certified bound on the approximation error. To this end, we employ a counterexample-guided inductive synthesis procedure. We show that this procedure produces a neural ODE with non-deterministic disturbances that constitutes a formal abstraction of the concrete model under analysis. This guarantees a fundamental property: if the abstract model is safe, i.e., free from any initialised trajectory that reaches an undesirable state, then the concrete model is also safe. By using neural ODEs with ReLU activation functions as abstractions, we cast the safety verification problem for nonlinear dynamical models into that of hybrid automata with affine dynamics, which we verify using SpaceEx. We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models, and additionally that it is effective on models that do not exhibit local Lipschitz continuity, which are out of reach for existing technologies.
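To make the synthesis procedure concrete, the following is a minimal sketch of a counterexample-guided inductive synthesis (CEGIS) loop in PyTorch. The dynamics `f`, the network architecture, the error bound `EPS`, and the falsifier `find_counterexample` are all illustrative assumptions not taken from the paper; in particular, the grid-based falsifier below is an unsound stand-in for the formal certification step, so this sketch demonstrates only the learner/falsifier interaction, not a certified abstraction.

```python
# Hypothetical CEGIS sketch: a learner proposes a ReLU network approximating
# the dynamics f, and a falsifier searches for points where the approximation
# error exceeds the target bound. A sound implementation would replace the
# grid search with an exact decision procedure over the whole domain.
import torch
import torch.nn as nn

torch.manual_seed(0)

EPS = 0.1                                            # target error bound (illustrative)
DOMAIN = torch.tensor([[-2.0, -2.0], [2.0, 2.0]])    # box [-2, 2]^2

def f(x):
    """Concrete nonlinear dynamics dx/dt = f(x) (a pendulum, for illustration)."""
    return torch.stack([x[:, 1], -torch.sin(x[:, 0])], dim=1)

# ReLU network: its piecewise-affine structure is what later allows the
# abstraction to be encoded as a hybrid automaton with affine dynamics.
net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

def train(data_x, data_y, epochs=500):
    """Learner: fit the network to samples of the concrete dynamics."""
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((net(data_x) - data_y) ** 2).mean()
        loss.backward()
        opt.step()

def find_counterexample():
    """Falsifier (unsound stand-in): search a dense grid for a violation of
    ||f(x) - net(x)|| <= EPS instead of formally certifying the bound."""
    g = torch.linspace(-2.0, 2.0, 200)
    xs = torch.cartesian_prod(g, g)
    with torch.no_grad():
        err = (f(xs) - net(xs)).norm(dim=1)
    worst = int(err.argmax())
    if err[worst].item() > EPS:
        return xs[worst:worst + 1], err[worst].item()
    return None, err[worst].item()

# CEGIS loop: alternate training with counterexample search, enriching the
# training set near each refutation until no violation is found.
data_x = DOMAIN[0] + (DOMAIN[1] - DOMAIN[0]) * torch.rand(1000, 2)
data_y = f(data_x)
for it in range(20):
    train(data_x, data_y)
    cex, err = find_counterexample()
    if cex is None:
        print(f"iteration {it}: no counterexample, max sampled error {err:.4f} <= {EPS}")
        break
    data_x = torch.cat([data_x, cex + 0.05 * torch.randn(50, 2)])
    data_y = f(data_x)
```

Once the loop terminates with a certified bound, the network plus a disturbance of magnitude `EPS` overapproximates `f`: every trajectory of the concrete model is contained among the trajectories of the neural abstraction, which is what makes safety of the abstraction carry over to the concrete model.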