In recent years, adversarial attacks have drawn increasing attention for their value in evaluating and improving the robustness of machine learning models, especially neural network models. However, previous attack methods have mainly focused on applying $l^p$ norm-bounded noise perturbations. In this paper, we instead introduce a novel adversarial attack method based on haze, a common phenomenon in real-world scenery. Our method synthesizes potentially adversarial haze into an image based on the atmospheric scattering model with high realism, misleading classifiers into predicting an incorrect class. We conduct experiments on two popular datasets, i.e., ImageNet and NIPS~2017. We demonstrate that the proposed method achieves a high success rate and exhibits better transferability across different classification models than the baselines. We also visualize the correlation matrices, which inspire us to jointly apply different perturbations to improve the success rate of the attack. We hope this work can boost the development of non-noise-based adversarial attacks and help evaluate and improve the robustness of DNNs.
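To illustrate the haze-synthesis step, the following is a minimal sketch of the standard atmospheric scattering model, $I(x) = J(x)\,t(x) + A\,(1 - t(x))$ with transmission $t(x) = e^{-\beta d(x)}$. The function name, the depth-map input, and the default parameter values are illustrative assumptions; the paper's actual attack additionally optimizes the haze parameters adversarially, which is not shown here.

```python
import numpy as np

def synthesize_haze(image, depth, beta=1.0, airlight=0.9):
    """Render haze onto an image via the atmospheric scattering model (sketch).

    image    : HxWx3 float array, clean scene radiance J(x) in [0, 1]
    depth    : HxW float array, scene depth d(x) (assumed available)
    beta     : scattering coefficient controlling haze density
    airlight : global atmospheric light A
    """
    t = np.exp(-beta * depth)[..., np.newaxis]  # transmission map t(x)
    # Convex blend of scene radiance and airlight: I = J*t + A*(1 - t)
    return image * t + airlight * (1.0 - t)
```

Since the output is a per-pixel convex combination of the image and the airlight, it stays in [0, 1] whenever both inputs do, and setting `beta = 0` recovers the clean image.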