Neural networks are increasingly applied to support decision making in safety-critical applications such as autonomous cars, unmanned aerial vehicles, and face-recognition-based authentication. While many impressive static verification techniques have been proposed to tackle the correctness problem of neural networks, static verification may never be sufficiently scalable to handle real-world networks. In this work, we propose a runtime verification method to ensure the correctness of neural networks. Given a neural network and a desirable safety property, we adopt state-of-the-art static verification techniques to identify strategic locations at which to introduce additional gates that "correct" the network's behavior at runtime. Experimental results show that our approach effectively generates neural networks that are guaranteed to satisfy the property, while remaining consistent with the original network most of the time.
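To make the "correction gate" idea concrete, here is a minimal sketch, not the paper's actual construction: the paper inserts gates at internal locations identified by static verification, whereas this simplification only guards the final output. The names `CorrectionGate`, `is_safe`, and `correct`, as well as the toy property, are hypothetical and for illustration only.

```python
# Minimal sketch of a runtime correction gate (assumed names, toy property).
import torch
import torch.nn as nn


class CorrectionGate(nn.Module):
    """Hypothetical gate: pass safe outputs through unchanged; override
    unsafe ones so the guarded network always satisfies the property."""

    def __init__(self, network: nn.Module, is_safe, correct):
        super().__init__()
        self.network = network
        self.is_safe = is_safe    # property checker: Tensor -> bool
        self.correct = correct    # correction map: Tensor -> safe Tensor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.network(x)
        if self.is_safe(y):
            # Common case: original behavior is preserved.
            return y
        # Rare case: replace the raw output with a guaranteed-safe one.
        return self.correct(y)


if __name__ == "__main__":
    net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

    # Toy safety property: logit 0 may exceed logit 1 by at most 5.
    def is_safe(y: torch.Tensor) -> bool:
        return bool((y[..., 0] - y[..., 1] <= 5.0).all())

    # Correction: shrink logit 0 just enough to re-establish the property.
    def correct(y: torch.Tensor) -> torch.Tensor:
        y = y.clone()
        excess = (y[..., 0] - y[..., 1] - 5.0).clamp(min=0.0)
        y[..., 0] = y[..., 0] - excess
        return y

    guarded = CorrectionGate(net, is_safe, correct)
    print(guarded(torch.randn(1, 4)))
```

The design mirrors the guarantee claimed in the abstract: every output either already satisfies the property (fast path, identical to the original network) or is deterministically corrected, so the guarded network satisfies the property by construction while agreeing with the original network whenever it behaves safely.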