Many available formal verification methods have been shown to be instances of a unified Branch-and-Bound (BaB) formulation. We propose a novel machine learning framework that can be used both for designing an effective branching strategy and for computing better lower bounds. Specifically, we learn two graph neural networks (GNNs) that both directly treat the network we want to verify as a graph input and perform forward-backward passes through the GNN layers. We use one GNN to imitate the behaviour of the strong branching heuristic and the other to compute a feasible dual solution of the convex relaxation, which yields a valid lower bound. We also provide a new verification dataset that is more challenging than those used in the literature, offering an effective alternative for testing algorithmic improvements in verification. Whilst using just one of the GNNs already reduces verification time, we obtain the best performance by combining the two GNN approaches. Our combined framework achieves a 50\% reduction in both the number of branches and the time required for verification on various convolutional networks when compared to several state-of-the-art verification methods. In addition, we show that our GNN models generalize well to harder properties on larger, unseen networks.
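To make the architecture described above concrete, the following is a minimal, hypothetical sketch (in PyTorch) of a GNN that treats the network to be verified as a layered graph whose nodes are neurons, and that updates node embeddings with forward and backward message passes along the verified network's own weight matrices. All class and parameter names here are illustrative assumptions, not the authors' implementation.

\begin{verbatim}
import torch
import torch.nn as nn


class LayeredGraphGNN(nn.Module):
    def __init__(self, feat_dim, embed_dim):
        super().__init__()
        # Embed per-neuron features (e.g. pre-activation bounds).
        self.embed = nn.Linear(feat_dim, embed_dim)
        # Combine a node's embedding with messages from the previous
        # (forward pass) or next (backward pass) layer.
        self.fwd_update = nn.Linear(2 * embed_dim, embed_dim)
        self.bwd_update = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, node_feats, weights, passes=2):
        # node_feats: list of [n_l, feat_dim] tensors, one per layer.
        # weights:    list of [n_{l+1}, n_l] tensors of the verified net.
        h = [torch.relu(self.embed(x)) for x in node_feats]
        for _ in range(passes):
            # Forward message pass: layer l -> layer l+1 along W_l.
            for l, W in enumerate(weights):
                msg = W @ h[l]  # aggregate from predecessor neurons
                h[l + 1] = torch.relu(self.fwd_update(
                    torch.cat([h[l + 1], msg], dim=-1)))
            # Backward message pass: layer l+1 -> layer l along W_l^T.
            for l in reversed(range(len(weights))):
                msg = weights[l].t() @ h[l + 1]  # aggregate from successors
                h[l] = torch.relu(self.bwd_update(
                    torch.cat([h[l], msg], dim=-1)))
        return h  # per-neuron embeddings, e.g. scored for branching


# Toy usage on a small 3 -> 4 -> 2 fully connected network.
feats = [torch.randn(3, 5), torch.randn(4, 5), torch.randn(2, 5)]
ws = [torch.randn(4, 3), torch.randn(2, 4)]
embeddings = LayeredGraphGNN(feat_dim=5, embed_dim=8)(feats, ws)
\end{verbatim}

Under this sketch, the resulting per-neuron embeddings could be scored to rank branching candidates, or decoded into dual variables of the convex relaxation; the precise readouts are left unspecified here.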