Reducing the size of neural network models is a critical step in moving AI from a cloud-centric to an edge-centric (i.e. on-device) compute paradigm. This shift from cloud to edge is motivated by a number of factors, including reduced latency, improved security, and greater flexibility of AI algorithms across several application domains (e.g. transportation, healthcare, and defense). However, it is currently unclear how model compression techniques affect the robustness of AI algorithms against adversarial attacks. This paper explores the effect of quantization, one of the most common compression techniques, on the adversarial robustness of neural networks. Specifically, we investigate and model the accuracy of quantized neural networks on adversarially perturbed images. Results indicate that for simple gradient-based attacks, quantization can either improve or degrade adversarial robustness, depending on the attack strength.
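The interaction the abstract describes can be illustrated with a minimal sketch. The following is not the paper's experimental setup; it is a hypothetical example using a linear classifier, uniform symmetric weight quantization, and an FGSM-style gradient perturbation (the canonical simple gradient-based attack), showing how one would compare clean and adversarial accuracy for full-precision vs. quantized weights.

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization: round weights onto a grid of
    2**(bits-1) - 1 positive levels (plus mirrored negatives and zero)."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def fgsm(x, w, y, eps):
    """FGSM-style perturbation for a linear scorer f(x) = w @ x with
    label y in {-1, +1}: step in the direction that decreases y * f(x).
    For a linear model that gradient direction is sign(-y * w)."""
    return x + eps * np.sign(-y * w)

rng = np.random.default_rng(0)
w = rng.normal(size=32)            # stand-in "trained" weights
x = w + 0.1 * rng.normal(size=32)  # an input the model classifies as +1
y = 1

w_q = quantize(w, 4)               # 4-bit quantized copy of the weights

# Attack crafted against the full-precision model, then evaluated on both.
x_adv = fgsm(x, w, y, eps=0.05)

margin_fp = (w @ x, w @ x_adv)     # full-precision margin: clean vs. attacked
margin_q = (w_q @ x, w_q @ x_adv)  # quantized margin: clean vs. attacked
print("full-precision margins:", margin_fp)
print("quantized margins:     ", margin_q)
```

In this toy setting the perturbation provably shrinks both models' margins, and whether the quantized model's prediction flips first depends on `eps` and the bit width, mirroring the abstract's claim that the effect of quantization on robustness depends on attack strength.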