Motivated by the need to reliably characterize the robustness of deep neural networks, researchers have developed verification algorithms for deep neural networks. Given a neural network, a verifier aims to answer whether certain properties hold for all inputs in a given space. However, little attention has been paid to floating point numerical error in neural network verification. We show that neglecting floating point error is easily exploitable in practice. For a pretrained neural network, we present a method that efficiently searches for inputs on which a complete verifier incorrectly claims the network is robust. We also present a method to construct neural network architectures and weights that induce incorrect results from an incomplete verifier. Our results highlight that, to achieve practically reliable verification of neural networks, any verification system must accurately (or conservatively) model the effects of any floating point computations in the network inference or verification system.
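As a minimal illustration of the underlying issue (not a method from this work), the sketch below contrasts a score computed in float64, standing in for the exact real arithmetic a verifier may implicitly assume, with the float32 result actually produced at inference time; a robustness certificate whose decision margin is smaller than this rounding gap can be contradicted by the deployed floating point network.

```python
import numpy as np

# Hypothetical example: the gap between real-number reasoning and float32 inference.
rng = np.random.default_rng(0)
w = rng.standard_normal(10_000).astype(np.float32)
x = rng.standard_normal(10_000).astype(np.float32)

# Stand-in for exact arithmetic: accumulate the dot product in float64.
exact = float(np.dot(w.astype(np.float64), x.astype(np.float64)))
# Score actually produced by float32 inference.
observed = float(np.dot(w, x))

print(f"float64 score: {exact:.10f}")
print(f"float32 score: {observed:.10f}")
print(f"rounding gap : {abs(exact - observed):.2e}")
# If a verifier certifies robustness based on a margin smaller than this gap,
# its claim need not hold for the floating point computation that is deployed.
```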