Deep neural networks (DNNs) are becoming increasingly important components of software, and are considered the state-of-the-art solution for a number of problems, such as image recognition. However, DNNs are far from infallible, and their incorrect behavior can have disastrous real-world consequences. This paper addresses the problem of architecture-preserving V-polytope provable repair of DNNs. A V-polytope defines a convex bounded polytope using its vertex representation. V-polytope provable repair guarantees that the repaired DNN satisfies the given specification on the infinite set of points in the given V-polytope. An architecture-preserving repair only modifies the parameters of the DNN, without modifying its architecture. The repair has the flexibility to modify multiple layers of the DNN, and runs in polynomial time. It supports DNNs with activation functions that have some linear pieces, as well as fully-connected, convolutional, pooling, and residual layers. To the best of our knowledge, this is the first provable repair approach that has all of these features. We implement our approach in a tool called APRNN. Using MNIST, ImageNet, and ACAS Xu DNNs, we show that it has better efficiency, scalability, and generalization compared to PRDNN and REASSURE, prior provable repair methods that are not architecture-preserving.
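As a minimal illustration of the V-polytope setting (a sketch using standard convex-geometry notation; the symbols $P$, $v_i$, $N'$, and $\varphi$ are our own choices here, not fixed by the abstract), a V-polytope is the convex hull of finitely many vertices, and the repair guarantee is a pointwise property over that hull:

% Illustrative notation only: a V-polytope given by vertices v_1, ..., v_m.
\[
  P \;=\; \mathrm{conv}\{v_1,\dots,v_m\}
    \;=\; \Bigl\{\, \textstyle\sum_{i=1}^{m} \lambda_i v_i
      \;\Bigm|\; \lambda_i \ge 0,\ \textstyle\sum_{i=1}^{m} \lambda_i = 1 \,\Bigr\}.
\]
% A provable repair must produce a network N' whose outputs satisfy the given
% specification phi on every (infinitely many) point of P:
\[
  \forall x \in P.\;\; \varphi\bigl(N'(x)\bigr).
\]

Note that $P$ has infinitely many points whenever $m \ge 2$ with distinct vertices, which is why pointwise testing cannot establish the guarantee and a provable repair method is needed.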