With the increasing integration of neural networks as components in mission-critical systems, there is a growing need to ensure that they satisfy various safety and liveness requirements. In recent years, numerous sound and complete verification methods have been proposed towards that end, but these typically suffer from severe scalability limitations. Recent work has proposed enhancing such verification techniques with abstraction-refinement capabilities, which have been shown to boost scalability: instead of verifying a large and complex network, the verifier constructs and then verifies a much smaller network, whose correctness implies the correctness of the original. A shortcoming of this scheme is that if verification of the smaller network fails, the verifier must perform a refinement step that increases the size of the network being verified, and then start verifying the new network from scratch -- effectively ``wasting'' its earlier work on the smaller network. In this paper, we present an enhancement to abstraction-based verification of neural networks, based on \emph{residual reasoning}: the process of utilizing information acquired while verifying an abstract network in order to expedite the verification of a refined network. In essence, the method allows the verifier to store information about parts of the search space in which the refined network is guaranteed to behave correctly, letting it focus on areas where bugs might still be discovered. We implemented our approach as an extension to the Marabou verifier, and obtained promising results.