The robustness of deep neural networks is crucial to modern AI-enabled systems and should be formally verified. Neural networks with sigmoid-like activation functions have been adopted in a wide range of applications. Due to their non-linearity, sigmoid-like activation functions are usually over-approximated for efficient verification, which inevitably introduces imprecision. Considerable efforts have been devoted to finding so-called tighter approximations that yield more precise verification results. However, existing tightness definitions are heuristic and lack theoretical foundations. We conduct a thorough empirical analysis of existing neuron-wise characterizations of tightness and reveal that each is superior only on specific neural networks. We then introduce the notion of network-wise tightness as a unified tightness definition and show that computing network-wise tightness is a complex non-convex optimization problem. We circumvent this complexity from two different perspectives via two efficient approximation approaches that are provably the tightest. The experimental results demonstrate the advantages of our approaches over the state of the art: (i) improving certified lower bounds of robustness by up to 251.28%; and (ii) yielding notably more precise verification results on convolutional networks.
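For context, the snippet below is a minimal sketch of the kind of linear over-approximation the abstract refers to: sound parallel upper and lower linear bounds for the sigmoid activation on an input interval [l, u]. This is an illustrative construction under our own assumptions (the function names and the slope choice min(σ'(l), σ'(u)) are ours), not the paper's approximation approach.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def linear_bounds(l, u):
    """Sound parallel linear bounds for sigmoid on [l, u] (illustrative).

    Slope lam = min(sigma'(l), sigma'(u)): since sigma' is unimodal with
    its maximum at 0, its minimum over [l, u] is attained at an endpoint,
    so sigma(x) - lam*x is non-decreasing on [l, u]. Hence
        lam*x + (sigma(l) - lam*l) <= sigma(x) <= lam*x + (sigma(u) - lam*u).
    """
    lam = min(sigmoid_prime(l), sigmoid_prime(u))
    lower_intercept = sigmoid(l) - lam * l  # lower line touches the curve at x = l
    upper_intercept = sigmoid(u) - lam * u  # upper line touches the curve at x = u
    return lam, lower_intercept, upper_intercept

# Usage: over-approximate sigmoid on [-1.0, 2.0] and check soundness.
lam, b_lo, b_up = linear_bounds(-1.0, 2.0)
xs = np.linspace(-1.0, 2.0, 101)
assert np.all(lam * xs + b_lo <= sigmoid(xs) + 1e-12)
assert np.all(sigmoid(xs) <= lam * xs + b_up + 1e-12)
```

The gap between the two lines is the imprecision the abstract mentions; tighter approximations shrink this gap, and the paper's contribution is a principled, network-wise way of defining and achieving such tightness.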