Complete verification of deep neural networks (DNNs) can exactly determine whether a DNN satisfies a desired trustworthy property (e.g., robustness, fairness) on an infinite set of inputs. Despite tremendous progress over the years in improving the scalability of complete verifiers on individual DNNs, these verifiers are inherently inefficient when a deployed DNN is updated to improve its inference speed or accuracy: the expensive verifier must be rerun from scratch on the updated DNN. To improve efficiency, we propose a new, general framework for incremental and complete DNN verification built on the design of novel theory, data structures, and algorithms. Our contributions, implemented in a tool named IVAN, yield an overall geometric mean speedup of 2.4x for verifying challenging MNIST and CIFAR10 classifiers and a geometric mean speedup of 3.8x for the ACAS-XU classifiers over the state-of-the-art baselines.