Formal verification of neural networks is critical for their safe adoption in real-world applications. However, designing a verifier which can handle realistic networks in a precise manner remains an open and difficult challenge. In this paper, we take a major step in addressing this challenge and present a new framework, called PRIMA, that computes precise convex approximations of arbitrary non-linear activations. PRIMA is based on novel approximation algorithms that compute the convex hull of polytopes, leveraging concepts from computational geometry. The algorithms have polynomial complexity, yield fewer constraints, and minimize precision loss. We evaluate the effectiveness of PRIMA on challenging neural networks with ReLU, Sigmoid, and Tanh activations. Our results show that PRIMA is significantly more precise than the state-of-the-art, verifying robustness for up to 16%, 30%, and 34% more images than prior work on ReLU-, Sigmoid-, and Tanh-based networks, respectively.
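As background for the kind of object the abstract describes, the classic single-neuron "triangle" relaxation of ReLU shows what a convex approximation of a non-linear activation looks like: for an input bounded in [l, u] with l < 0 < u, the convex hull of the ReLU graph is a triangle given by three linear constraints. This sketch is standard prior background only, not PRIMA's multi-neuron algorithm; the function and constraint encoding below are illustrative choices.

```python
def relu_triangle_relaxation(l, u):
    """Convex hull of {(x, relu(x)) : l <= x <= u}, assuming l < 0 < u.

    Each constraint is a triple (a, b, c) meaning a*x + b*y <= c.
    """
    assert l < 0 < u
    slope = u / (u - l)  # slope of the chord from (l, 0) to (u, u)
    return [
        (0.0, -1.0, 0.0),           # y >= 0
        (1.0, -1.0, 0.0),           # y >= x
        (-slope, 1.0, -slope * l),  # y <= slope * (x - l), the upper chord
    ]

def satisfies(constraints, x, y, eps=1e-9):
    """Check whether the point (x, y) lies inside all half-spaces."""
    return all(a * x + b * y <= c + eps for (a, b, c) in constraints)
```

Points on the ReLU graph, such as (0.5, 0.5) for bounds [-1, 1], satisfy all three constraints, while points above the upper chord are cut off; precision in this setting means how tightly such constraints enclose the activation's graph.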