Deep neural networks (DNNs) have achieved great success in many tasks, making it crucial to evaluate the robustness of advanced DNNs. Traditional methods use stickers as physical perturbations to fool classifiers, but this approach makes stealthiness difficult to achieve and suffers from printing loss. Some newer physical attacks use light beams (e.g., lasers, projectors) to perform attacks, yet their optical patterns are artificial rather than natural. In this work, we study a new type of physical attack, called adversarial catoptric light (AdvCL), in which adversarial perturbations are generated by a common natural phenomenon, catoptric light, to achieve stealthy and naturalistic adversarial attacks against advanced DNNs in physical environments. Carefully designed experiments demonstrate the effectiveness of the proposed method in both simulated and real-world environments, with an attack success rate of 94.90% on a subset of ImageNet and 83.50% in the real world. We also discuss AdvCL's transferability and a defense strategy against this attack.
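To make the attack setup concrete, the following is a minimal sketch (not the paper's implementation) of how a catoptric-light-style perturbation could be simulated digitally and tested against a pretrained ImageNet classifier. The light model here, an additive elliptical tinted spot parameterized by center, radii, angle, color, and intensity, is an assumption for illustration; AdvCL's actual parameterization may differ. The helper `render_catoptric_light` and the file `example.jpg` are hypothetical.

```python
# Hypothetical sketch of simulating a reflected-light perturbation and
# querying a pretrained classifier; the light model is an assumption.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image

def render_catoptric_light(img, center, radii, angle, color, intensity):
    """Blend an elliptical light spot (approximating reflected light) onto img.

    img: HxWx3 float array in [0, 1]; color: RGB tint of the reflection.
    """
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Rotate coordinates so the ellipse can be oriented by `angle` (radians).
    dx, dy = xs - center[0], ys - center[1]
    u = dx * np.cos(angle) + dy * np.sin(angle)
    v = -dx * np.sin(angle) + dy * np.cos(angle)
    # Soft elliptical falloff: 1 at the center, decaying toward the boundary.
    mask = np.clip(1.0 - ((u / radii[0]) ** 2 + (v / radii[1]) ** 2), 0.0, 1.0)
    spot = mask[..., None] * np.asarray(color, dtype=np.float32)
    # Additive blending mimics extra light reflected onto the scene.
    return np.clip(img + intensity * spot, 0.0, 1.0)

# Query a pretrained ImageNet classifier on the perturbed image.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = np.asarray(Image.open("example.jpg").convert("RGB"),
                 dtype=np.float32) / 255.0
adv = render_catoptric_light(img, center=(120, 90), radii=(80, 30),
                             angle=0.6, color=(1.0, 0.9, 0.7), intensity=0.5)
adv_pil = Image.fromarray((adv * 255).astype(np.uint8))
with torch.no_grad():
    logits = model(preprocess(adv_pil).unsqueeze(0))
print("Predicted class index:", logits.argmax(1).item())
```

In a full black-box attack, the light parameters would be searched (e.g., by an evolutionary or random search over center, radii, angle, color, and intensity) until the classifier's prediction changes; the rendering above only shows how a single candidate perturbation might be simulated and evaluated.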