We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks. First, we propose novel bounding algorithms based on Lagrangian Decomposition. Previous works have used off-the-shelf solvers to solve relaxations at each node of the BaB tree, or constructed weaker relaxations that can be solved efficiently, but lead to unnecessarily weak bounds. Our formulation restricts the optimization to a subspace of the dual domain that is guaranteed to contain the optimum, resulting in accelerated convergence. Furthermore, it allows for a massively parallel implementation, which is amenable to GPU acceleration via modern deep learning frameworks. Second, we present a novel activation-based branching strategy. By coupling an inexpensive heuristic with fast dual bounding, our branching scheme greatly reduces the size of the BaB tree compared to previous heuristic methods. Moreover, it performs competitively with a recent strategy based on learning algorithms, without its large offline training cost. Finally, we design a BaB framework, named Branch and Dual Network Bound (BaDNB), based on our novel bounding and branching algorithms. We show that BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial robustness properties.