Researchers have developed neural network verification algorithms motivated by the need to characterize the robustness of deep neural networks. These verifiers aim to answer whether a neural network guarantees certain properties with respect to all inputs in a space. However, many verifiers model floating point arithmetic inaccurately without thoroughly discussing the consequences. We show that neglecting floating point error leads to unsound verification that can be systematically exploited in practice. For a pretrained neural network, we present a method that efficiently searches for inputs that serve as witnesses to the incorrectness of robustness claims made by a complete verifier. We also present a method to construct neural network architectures and weights that induce wrong results from an incomplete verifier. Our results highlight that, to achieve practically reliable verification of neural networks, any verification system must accurately (or conservatively) model the effects of any floating point computations in the network inference or verification system.
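To make the core claim concrete, here is a minimal sketch (not code from the paper; the weight and input values are illustrative) of how floating point evaluation order changes the result of a sum that is unambiguous over the reals. A verifier reasoning in exact real arithmetic would conclude the score below is 1.0, yet a deployed float64 computation that accumulates left to right yields 0.0, so a property "verified" over the reals can fail on the actual implementation:

```python
# Hypothetical pre-activation score: the dot product w . x with
# w = (1e16, 1, -1e16) and x = (1, 1, 1). Over the reals this is exactly 1.0.
terms = [1e16, 1.0, -1e16]

# Left-to-right accumulation: 1e16 + 1.0 rounds back to 1e16 in float64
# (the spacing between adjacent float64 values near 1e16 is 2.0),
# so the +1.0 is silently lost before the cancellation.
left_to_right = 0.0
for t in terms:
    left_to_right += t

# Reordered accumulation: cancel the large terms first, then add 1.0.
reordered = (terms[0] + terms[2]) + terms[1]

print(left_to_right)  # 0.0  (disagrees with the exact result)
print(reordered)      # 1.0  (matches the exact real-arithmetic value)
```

The same mechanism scales up inside a network: rounding in the inference code can push a point across a decision boundary that the verifier, modeling exact arithmetic, proved could not be crossed.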