Formal verification of neural networks (NNs) is a challenging and important problem. Existing efficient complete solvers typically rely on a branch-and-bound (BaB) process, which splits the problem domain into sub-domains and solves each sub-domain with a faster but weaker incomplete verifier, such as Linear Programming (LP) on linearly relaxed sub-domains. In this paper, we propose to replace LP during the BaB process with backward-mode linear relaxation based perturbation analysis (LiRPA), which can be implemented efficiently on typical machine learning accelerators such as GPUs and TPUs. However, unlike LP, naively applied LiRPA can produce much weaker bounds and cannot detect certain conflicts among the constraints of split sub-domains, making the overall procedure incomplete after BaB. To address these challenges, we combine a fast gradient-based bound tightening procedure with batched splits and a design that invokes the LP bounding procedure only minimally, enabling us to effectively use LiRPA on accelerator hardware for the challenging complete NN verification problem and to significantly outperform LP-based approaches. On a single GPU, we demonstrate an order-of-magnitude speedup over existing LP-based approaches.
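To make the batched design concrete, below is a minimal sketch of a BaB loop driven by batched incomplete bounds; it is not the paper's implementation. The helpers `batch_lower_bounds` and `split` are hypothetical placeholders: the former stands in for a backward-mode LiRPA pass that bounds a whole batch of sub-domains in one accelerator call (optionally tightened by gradient ascent on the relaxation parameters), the latter for branching on an unstable ReLU. Counterexample search and the minimal LP feasibility checks described above are omitted for brevity.

```python
from typing import Callable, List, Sequence, Tuple

# A sub-domain is represented abstractly; in practice it would carry the
# input region plus the ReLU split constraints accumulated so far.
Domain = object

def bab_verify(
    batch_lower_bounds: Callable[[Sequence[Domain]], List[float]],
    split: Callable[[Domain], Tuple[Domain, Domain]],
    root: Domain,
    threshold: float = 0.0,
    batch_size: int = 1024,
    max_domains: int = 10**6,
) -> bool:
    """Sketch of branch-and-bound with batched incomplete bounds.

    `batch_lower_bounds` and `split` are assumed callables supplied by
    the caller, not a real library API.
    """
    pending: List[Domain] = [root]
    while pending:
        if len(pending) > max_domains:
            return False  # budget exhausted; result remains unknown
        batch, pending = pending[:batch_size], pending[batch_size:]
        # One accelerator pass bounds the entire batch of sub-domains.
        for dom, lb in zip(batch, batch_lower_bounds(batch)):
            if lb >= threshold:
                continue              # property proven on this sub-domain
            left, right = split(dom)  # branch on an unstable neuron
            pending.extend([left, right])
    return True                       # every sub-domain verified
```

The key design point in this sketch is that `batch_lower_bounds` processes many sub-domains in a single pass, which is where the GPU parallelism pays off relative to solving one LP per sub-domain.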