Neural networks achieve ever-higher accuracy at the cost of increasing energy and computation. Quantization can greatly reduce this cost, and quantized models are more hardware-friendly with acceptable accuracy loss. On the other hand, recent research has found that neural networks are vulnerable to adversarial attacks, and a model's robustness can be improved only with defense methods such as adversarial training. In this work, we find that adversarially-trained neural networks are more vulnerable to quantization loss than plain models. To minimize the adversarial and quantization losses simultaneously and to make the quantized model robust, we propose a layer-wise adversarial-aware quantization method that uses the Lipschitz constant to choose the best quantization parameter settings for a neural network. We theoretically derive the losses and prove the consistency of our metric selection. Experimental results show that our method effectively and efficiently improves the robustness of quantized adversarially-trained neural networks.
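To make the idea concrete, the following is a minimal sketch of a layer-wise, Lipschitz-guided bit-width assignment. It is an illustrative assumption, not the paper's actual algorithm: `layer_lipschitz` upper-bounds a linear layer's Lipschitz constant by the spectral norm of its weight matrix, and the hypothetical policy `assign_bitwidths` grants more quantization bits to layers with larger constants, since quantization noise injected there is amplified most at the network output.

```python
import numpy as np

def layer_lipschitz(weight: np.ndarray) -> float:
    """Spectral norm (largest singular value) of a weight matrix,
    an upper bound on a linear layer's Lipschitz constant."""
    return float(np.linalg.svd(weight, compute_uv=False)[0])

def assign_bitwidths(weights, budget_bits=(4, 6, 8)):
    """Hypothetical layer-wise policy (not the paper's method):
    rank layers by their Lipschitz constant and give the most
    sensitive (largest-constant) layers the widest bit-widths."""
    lips = np.array([layer_lipschitz(w) for w in weights])
    order = np.argsort(lips)  # ascending: least sensitive first
    bits = np.empty(len(weights), dtype=int)
    for rank, idx in enumerate(order):
        # Split the ranked layers into len(budget_bits) groups.
        group = rank * len(budget_bits) // len(weights)
        bits[idx] = budget_bits[group]
    return lips, bits
```

For example, three layers whose weight matrices have spectral norms 1, 5, and 10 would receive 4, 6, and 8 bits respectively under this sketch; the actual selection criterion in the paper couples the Lipschitz constant with the derived quantization loss.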