Deep neural networks (DNNs) are increasingly being employed in safety-critical systems, and there is an urgent need to guarantee their correctness. Consequently, the verification community has devised multiple techniques and tools for verifying DNNs. When a DNN verifier discovers an input that triggers an error, that error is easy to confirm; but when it reports that no error exists, there is no way to ensure that the verification tool itself is not flawed. Since multiple errors have already been observed in DNN verification tools, this calls the applicability of DNN verification into question. In this work, we present a novel mechanism for enhancing Simplex-based DNN verifiers with proof-production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors. Our proof production is based on an efficient adaptation of the well-known Farkas' lemma, combined with mechanisms for handling piecewise-linear functions and numerical precision errors. As a proof of concept, we implemented our technique on top of the Marabou DNN verifier. Our evaluation on a safety-critical system for airborne collision avoidance shows that proof production succeeds in almost all cases and requires only minimal overhead.
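To illustrate the kind of "easy-to-check witness of unsatisfiability" the abstract refers to, the following is a minimal, hypothetical sketch (not the paper's actual implementation) of how a Farkas certificate can be checked independently of the verifier. Farkas' lemma states that a linear system Ax ≤ b is infeasible if and only if there exists a vector y ≥ 0 with yᵀA = 0 and yᵀb < 0; verifying such a y requires only a few cheap arithmetic tests. The function name, example system, and numerical tolerance below are illustrative assumptions.

```python
# Hypothetical sketch of a Farkas-certificate checker for a linear
# system A @ x <= b. Per Farkas' lemma, the system is infeasible iff
# some y >= 0 satisfies y^T A = 0 and y^T b < 0. A checker needs only
# these cheap tests, which is what makes the witness easy to validate
# independently of the (possibly flawed) verifier that produced it.

def check_farkas_certificate(A, b, y, tol=1e-9):
    """Return True iff y is a valid Farkas witness that A x <= b is infeasible."""
    m, n = len(A), len(A[0])
    # y must be nonnegative and of the right dimension
    if len(y) != m or any(yi < -tol for yi in y):
        return False
    # y^T A must be the zero vector (up to the numerical tolerance)
    combo = [sum(y[i] * A[i][j] for i in range(m)) for j in range(n)]
    if any(abs(c) > tol for c in combo):
        return False
    # y^T b must be strictly negative
    return sum(y[i] * b[i] for i in range(m)) < -tol

# Example: x <= 1 together with -x <= -2 (i.e., x >= 2) is infeasible.
A = [[1.0], [-1.0]]
b = [1.0, -2.0]
y = [1.0, 1.0]  # y^T A = 0 and y^T b = -1 < 0, so y certifies infeasibility
print(check_farkas_certificate(A, b, y))  # True
```

In a proof-producing Simplex-based verifier, certificates of this shape would be emitted for each unsatisfiable linear query, with additional machinery (as the abstract notes) to handle piecewise-linear activations and floating-point imprecision.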