Certifying the robustness of neural networks against adversarial attacks is essential for their reliable adoption in safety-critical systems such as autonomous driving and medical diagnosis. Unfortunately, state-of-the-art verifiers either fail to scale to larger networks or are too imprecise to prove robustness, limiting their practical adoption. In this work, we introduce GPUPoly, a scalable verifier that can prove the robustness of significantly larger deep neural networks than previously possible. The key technical insight behind GPUPoly is the design of custom, sound polyhedra algorithms for neural network verification on a GPU. Our algorithms leverage the available GPU parallelism and the inherent sparsity of the underlying verification task. GPUPoly scales to large networks: for example, it can prove the robustness of a 1M-neuron, 34-layer deep residual network in approximately 34.5 ms. We believe GPUPoly is a promising step towards the practical verification of real-world neural networks.