Adversarial examples have drawn significant attention from the machine learning and security communities. One line of work on tackling adversarial examples is certified robustness via randomized smoothing, which provides a theoretical robustness guarantee. However, such mechanisms typically rely on floating-point arithmetic at inference time and incur a large memory footprint and daunting computational cost. These defensive models can neither run efficiently on edge devices nor be deployed on integer-only logic units such as Turing Tensor Cores or integer-only ARM processors. To overcome these challenges, we propose an integer randomized smoothing approach with quantization that converts any classifier into a smoothed classifier using integer-only arithmetic, providing certified robustness against adversarial perturbations. We prove a tight robustness guarantee under the L2 norm for the proposed approach. We show that our approach obtains comparable accuracy and a 4x~5x speedup over floating-point certified robust methods on general-purpose CPUs and mobile devices, on two distinct datasets (CIFAR-10 and Caltech-101).
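To make the underlying mechanism concrete, the following is a minimal sketch of the standard (floating-point) randomized smoothing procedure that this work quantizes: the smoothed classifier predicts the class most frequently returned by the base classifier under Gaussian noise, and the empirical top-class probability yields an L2 certified radius of sigma times the Gaussian inverse CDF of that probability. The function name, the toy base classifier, and all parameters here are illustrative assumptions, not the paper's implementation (a faithful version would also use a confidence lower bound on the top-class probability rather than the raw Monte-Carlo estimate).

```python
import numpy as np
from scipy.stats import norm

def smoothed_predict_and_certify(base_classifier, x, sigma=0.25, n=1000, seed=0):
    """Monte-Carlo sketch of randomized smoothing (hypothetical helper).

    Returns the smoothed prediction and an *uncalibrated* L2 radius
    R = sigma * Phi^{-1}(p_A), where p_A is the empirical frequency of
    the top class under Gaussian input noise.
    """
    rng = np.random.default_rng(seed)
    # Sample n Gaussian perturbations of the input.
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    preds = np.array([base_classifier(x + eps) for eps in noise])
    classes, counts = np.unique(preds, return_counts=True)
    top = int(np.argmax(counts))
    # Clamp the empirical probability away from 1 so the radius stays finite.
    p_a = min(counts[top] / n, 1.0 - 1e-3)
    radius = sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0
    return int(classes[top]), radius

# Toy 1-D base classifier: predicts class 1 iff the input's mean is positive.
f = lambda z: int(z.mean() > 0)
x = np.full(4, 0.5)
label, radius = smoothed_predict_and_certify(f, x)
```

For this toy input the noise almost never flips the sign of the mean, so the top-class frequency is near 1 and the certified radius is close to `sigma * norm.ppf(0.999)`. Replacing the Gaussian sampling, counting, and inverse-CDF step with integer-only arithmetic is precisely the challenge the proposed approach addresses.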