As a new programming paradigm, deep neural networks (DNNs) have been increasingly deployed in practice, but their lack of robustness hinders their application in safety-critical domains. While there are techniques for verifying DNNs with formal guarantees, they are limited in scalability and accuracy. In this paper, we present a novel abstraction-refinement approach for scalable and exact DNN verification. Specifically, we propose a novel abstraction that reduces the size of DNNs by over-approximation. The result of verifying the abstract DNN is always conclusive if no spurious counterexample is reported. To eliminate spurious counterexamples introduced by abstraction, we propose a novel counterexample-guided refinement that refines the abstract DNN to exclude a given spurious counterexample while still over-approximating the original one. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For demonstration, we implement our approach using two promising, exact tools, Marabou and Planet, as the underlying verification engines, and evaluate it on the widely used benchmarks ACAS Xu, MNIST, and CIFAR-10. The results show that our approach can boost their performance by solving more problems and reducing verification time by up to 86.3% and 78.0%, respectively. Compared to the most relevant abstraction-refinement approach, our approach is 11.6-26.6 times faster.
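To make the overall flow concrete, the following is a minimal conceptual sketch of a counterexample-guided abstraction-refinement loop of the kind described above; it is not the paper's implementation, and the callables passed in (build_abstraction, verify_exact, is_spurious, refine) are hypothetical placeholders standing for the abstraction builder, an exact verification engine such as Marabou or Planet, the counterexample replay check on the original DNN, and the refinement step.

```python
def verify_with_refinement(network, prop,
                           build_abstraction, verify_exact,
                           is_spurious, refine):
    """Abstraction-refinement verification loop (conceptual sketch).

    Returns ("SAFE", None) if the property holds, or
    ("UNSAFE", counterexample) if a genuine violation is found.
    All helper callables are hypothetical placeholders, not any tool's API.
    """
    # Build an over-approximating abstraction that is smaller than the original DNN.
    abstract_net = build_abstraction(network)
    while True:
        # Run an exact verifier on the (smaller) abstract DNN.
        result = verify_exact(abstract_net, prop)
        if result.safe:
            # Safety of the over-approximation implies safety of the original DNN.
            return ("SAFE", None)
        cex = result.counterexample
        if not is_spurious(network, prop, cex):
            # The counterexample also violates the property on the original DNN.
            return ("UNSAFE", cex)
        # Spurious counterexample: refine the abstraction so it excludes cex
        # while still over-approximating the original network.
        abstract_net = refine(abstract_net, network, cex)
```

The key invariant is that every refined abstract DNN still over-approximates the original, so a "safe" verdict on the abstraction remains conclusive, while each refinement rules out at least the spurious counterexample that triggered it.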