Although Deep Neural Networks (DNNs) have been widely applied in various real-world scenarios, they are vulnerable to adversarial examples. Current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their attack forms. Compared with digital attacks, which generate perturbations directly on digital pixels, physical attacks are more practical in the real world. Owing to the serious security problems caused by physical adversarial examples, many works have been proposed in recent years to evaluate the physical adversarial robustness of DNNs. In this paper, we present a survey of current physical adversarial attacks and physical adversarial defenses in computer vision. To establish a taxonomy, we organize current physical attacks by attack task, attack form, and attack method, so that readers can gain a systematic understanding of this topic from different perspectives. For physical defenses, we build the taxonomy around pre-processing, in-processing, and post-processing of DNN models to achieve full coverage of adversarial defenses. Based on this survey, we finally discuss the challenges of this research field and give an outlook on future directions.