Neural network verification aims to provide provable bounds on the output of a neural network over a given input range. Notable prior works in this domain have either generated bounds using abstract domains, which preserve some dependency between intermediate neurons in the network, or framed verification as an optimization problem and solved a relaxation using Lagrangian methods. A key drawback of the latter technique is that each neuron is treated independently, thereby ignoring important neuron interactions. We provide an approach that merges these two threads, using zonotopes within a Lagrangian decomposition. Crucially, we can decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks. While each of these subproblems is provably hard, we provide relaxations that are amenable to efficient dual ascent procedures. Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques in both runtime and bound tightness.
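To make the abstract-domain side concrete, the following is a minimal sketch (not the paper's implementation) of sound zonotope propagation through an affine layer and a ReLU, in the DeepZ style: each crossing ReLU neuron is over-approximated by a linear map plus one fresh error generator. All function and variable names here are illustrative.

```python
import numpy as np

def zonotope_affine(c, G, W, b):
    """Exact affine image of the zonotope <center c, generators G> under x -> Wx + b."""
    return W @ c + b, W @ G

def zonotope_bounds(c, G):
    """Concretize: interval bounds implied by the zonotope (noise terms in [-1, 1])."""
    r = np.abs(G).sum(axis=1)
    return c - r, c + r

def zonotope_relu(c, G):
    """Sound over-approximation of elementwise ReLU, adding one new
    generator per neuron whose pre-activation interval crosses zero."""
    l, u = zonotope_bounds(c, G)
    n = c.shape[0]
    new_c, new_G, extra = c.copy(), G.copy(), []
    for i in range(n):
        if u[i] <= 0:            # provably inactive: output is exactly 0
            new_c[i] = 0.0
            new_G[i, :] = 0.0
        elif l[i] >= 0:          # provably active: identity, keep as is
            pass
        else:                    # crossing: ReLU(x) lies in lam*x + mu +/- mu
            lam = u[i] / (u[i] - l[i])
            mu = -lam * l[i] / 2.0
            new_c[i] = lam * c[i] + mu
            new_G[i, :] *= lam
            g = np.zeros(n)
            g[i] = mu            # fresh error term of magnitude mu
            extra.append(g)
    if extra:
        new_G = np.hstack([new_G, np.array(extra).T])
    return new_c, new_G
```

A box input `[c0 - eps, c0 + eps]` is the zonotope with center `c0` and generators `eps * I`; chaining `zonotope_affine` and `zonotope_relu` through the layers and calling `zonotope_bounds` at the output yields certified (if loose) output bounds, which is the kind of per-subnetwork primitive a Lagrangian decomposition can then coordinate and tighten.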