Although Deep Neural Networks (DNNs) have achieved impressive results in computer vision, their vulnerability to adversarial attacks remains a serious concern. A series of works has shown that adding carefully crafted perturbations to images can cause catastrophic degradation in DNN performance, and this phenomenon exists not only in the digital space but also in the physical space. Therefore, assessing the security of DNN-based systems is critical for deploying them safely in the real world, especially in security-critical applications, e.g., autonomous driving, video surveillance, and medical diagnosis. In this paper, we focus on physical adversarial attacks and provide a comprehensive survey of over 150 existing papers. We first clarify the concept of the physical adversarial attack and analyze its characteristics. Then, we define the adversarial medium, which is essential for performing attacks in the physical world. Next, we present physical adversarial attack methods in task order: classification, detection, and re-identification, and discuss how they address the trilemma of effectiveness, stealthiness, and robustness. Finally, we discuss the current challenges and potential future directions.
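To make the notion of "carefully crafted perturbations" concrete, the sketch below shows the classic one-step Fast Gradient Sign Method (FGSM) of Goodfellow et al., a digital-space attack that is a common starting point for the physical attacks surveyed here. It is a minimal PyTorch illustration, not a method from this survey; the function name fgsm_perturb and the budget epsilon are illustrative choices, and images are assumed to be batched tensors normalized to [0, 1].

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    """One-step FGSM sketch: perturb the image in the direction
    that maximally increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Take a signed gradient step of size epsilon, then clamp back
    # to the valid pixel range so the result is still a legal image.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

Even a small epsilon, imperceptible to humans, can flip the predicted label; physical attacks must additionally survive printing, lighting, and viewpoint changes, which is the robustness leg of the trilemma above.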