Deep learning has become a promising programming paradigm in software development, owing to its remarkable performance on many challenging tasks. Deep neural networks (DNNs) are increasingly being deployed in practice, but their use on resource-constrained devices is limited by their demand for computational power. Quantization has emerged as a promising technique to reduce the size of DNNs while retaining accuracy comparable to that of their floating-point counterparts. The resulting quantized neural networks (QNNs) can be implemented energy-efficiently. As with their floating-point counterparts, quality assurance techniques for QNNs, such as testing and formal verification, are essential but currently underexplored. In this work, we propose a novel and efficient formal verification approach for QNNs. In particular, we are the first to propose an encoding that reduces the verification problem of QNNs to solving integer linear constraints, which can be handled by off-the-shelf solvers. Our encoding is both sound and complete. We demonstrate the application of our approach to local robustness verification and maximum robustness radius computation. We implement our approach in a prototype tool QVIP and conduct a thorough evaluation. Experimental results on QNNs with different quantization bit widths confirm the effectiveness and efficiency of our approach: e.g., it is two orders of magnitude faster than state-of-the-art methods and solves more verification tasks within the same time limit.
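To make the verified property concrete, the sketch below models a toy QNN layer over pure integer arithmetic and decides local robustness and the maximum robustness radius. This is an illustration only, not the paper's encoding: the weights, bit width, and network shape are invented, and instead of emitting integer linear constraints for an off-the-shelf solver, it brute-forces the (small) integer perturbation ball, which decides the same property.

```python
from itertools import product

# Toy 4-bit QNN layer: integer weights/biases, outputs clamped to [0, 15].
# All concrete values here are illustrative, not taken from the paper.
W = [[3, -1], [-1, 2]]  # 2x2 integer weight matrix (hypothetical)
B = [0, 0]              # integer biases (hypothetical)

def qnn_forward(x):
    # y_j = clamp(sum_i W[j][i] * x_i + B[j], 0, 15): pure integer
    # arithmetic, so constraints of this shape can in principle be
    # expressed as integer linear constraints (the clamp via auxiliary
    # binary variables) and handed to an ILP solver.
    out = []
    for j in range(len(W)):
        z = sum(W[j][i] * x[i] for i in range(len(x))) + B[j]
        out.append(max(0, min(15, z)))
    return out

def is_locally_robust(x, r):
    # Local robustness: the classification (argmax) is unchanged for
    # every integer input within the L-infinity ball of radius r.
    # An ILP encoding would assert that the argmax differs and check
    # unsatisfiability; here we enumerate the ball directly.
    label = max(range(len(W)), key=lambda j: qnn_forward(x)[j])
    lo = [max(0, xi - r) for xi in x]
    hi = [min(15, xi + r) for xi in x]
    for xp in product(*[range(l, h + 1) for l, h in zip(lo, hi)]):
        if max(range(len(W)), key=lambda j: qnn_forward(list(xp))[j]) != label:
            return False
    return True

def max_robust_radius(x):
    # Largest r such that the QNN is locally robust at x.
    r = 0
    while r < 15 and is_locally_robust(x, r + 1):
        r += 1
    return r
```

For example, at the input `[6, 2]` this toy network is robust up to radius 2 but not radius 3, so `max_robust_radius([6, 2])` returns 2. Because the inputs range over a finite integer domain, both enumeration and a sound-and-complete integer-linear-constraint encoding give exact answers, with no floating-point over-approximation.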