Neural Networks (NNs) have increasingly apparent safety implications commensurate with their proliferation in real-world applications: both unanticipated and adversarial misclassifications can result in fatal outcomes. As a consequence, techniques of formal verification have been recognized as crucial to the design and deployment of safe NNs. In this paper, we introduce a new approach to formally verify the most commonly considered safety specifications for ReLU NNs, i.e., polytopic specifications on the input and output of the network. Like some other approaches, ours uses a relaxed convex program to mitigate the combinatorial complexity of the problem. What is unique to our approach, however, is how we use the convex solver: not only as a linear feasibility checker, but also as a means of penalizing the amount of relaxation allowed in solutions. In particular, we encode each ReLU by means of the usual linear constraints, and combine this with a convex objective function that penalizes the discrepancy between the output of each neuron and its relaxation. This objective is further structured to force the largest relaxations to appear closest to the input layer; this provides the additional benefit that, when conditioning layer by layer, the most problematic neurons are conditioned as early as possible. We leverage this paradigm to create a verification algorithm, PEREGRiNN, that is not only faster in general than competing approaches, but is also able to verify considerably more safety properties; we evaluated PEREGRiNN on a standard MNIST robustness verification suite to substantiate these claims.
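To make the encoding concrete, the following is a minimal sketch of a per-neuron relaxation and a penalty objective of the kind described above. It assumes the standard triangle relaxation of a ReLU; the symbols (pre-activation \hat{y}, relaxed post-activation y, pre-activation bounds l and u, and layer weights c_\ell) are illustrative and not necessarily the paper's notation.

% Triangle relaxation of a single ReLU y = max(0, \hat{y}),
% assuming pre-activation bounds l <= \hat{y} <= u with l < 0 < u:
\begin{align}
  y &\ge 0, \qquad y \ge \hat{y}, \\
  y &\le \frac{u\,(\hat{y} - l)}{u - l}.
\end{align}
% The first two constraints enforce y >= max(0, \hat{y}), so
% minimizing y penalizes the relaxation slack y - max(0, \hat{y}).
% One plausible way to force the largest relaxations toward the
% input layer (an illustrative choice, not the paper's exact
% weighting) is to make slack in deeper layers more expensive,
% i.e. c_1 < c_2 < ... < c_L over neurons i in layers \ell:
\begin{equation}
  \min \; \sum_{\ell=1}^{L} c_{\ell} \sum_{i} y_i^{(\ell)} .
\end{equation}

Under this kind of encoding, checking a polytopic safety property amounts to adding the input and output polytope constraints and querying the solver: since the relaxed program over-approximates the network, infeasibility of the relaxation certifies the property.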