Although Deep Neural Networks (DNNs) have been widely applied in various real-world scenarios, they are vulnerable to adversarial examples. Current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their attack forms. Compared with digital attacks, which generate perturbations directly in digital pixels, physical attacks are more practical in the real world. Owing to the serious security problems caused by physical adversarial examples, many works have been proposed in recent years to evaluate the physical adversarial robustness of DNNs. In this paper, we present a survey of current physical adversarial attacks and physical adversarial defenses in computer vision. To establish a taxonomy, we organize the current physical attacks by attack task, attack form, and attack method, so that readers can gain a systematic understanding of this topic from different aspects. For physical defenses, we establish the taxonomy along the pre-processing, in-processing, and post-processing stages of DNN models to achieve full coverage of adversarial defenses. Based on this survey, we finally discuss the challenges of this research field and provide an outlook on future directions.