Though it is well known that the performance of deep neural networks (DNNs) degrades under certain lighting conditions, the threat posed by light beams emitted from a physical source acting as an adversarial attacker on DNNs in real-world scenarios has not been studied. In this work, we show that DNNs can be easily fooled by simply using a laser beam. To this end, we propose a novel attack method called Adversarial Laser Beam ($AdvLB$), which manipulates the physical parameters of a laser beam to perform adversarial attacks. Experiments demonstrate the effectiveness of the proposed approach in both digital and physical settings. We further analyze the evaluation results empirically and reveal that the proposed laser beam attack can lead to some interesting prediction errors in state-of-the-art DNNs. We envisage that the proposed $AdvLB$ method enriches the current family of adversarial attacks and lays the foundation for future studies of robustness to light.
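To make the idea of "manipulating a laser beam's physical parameters" concrete, the following is a minimal sketch of how a laser perturbation might be rendered onto an image in the digital setting. It is not the paper's implementation: the specific parameterization (wavelength, beam angle, width, intensity), the `wavelength_to_rgb` helper, and the additive blending are all assumptions made for illustration.

```python
import numpy as np

def wavelength_to_rgb(wavelength_nm):
    """Rough visible-spectrum wavelength -> RGB approximation (hypothetical helper)."""
    w = wavelength_nm
    if 380 <= w < 440:    r, g, b = -(w - 440) / 60.0, 0.0, 1.0
    elif 440 <= w < 490:  r, g, b = 0.0, (w - 440) / 50.0, 1.0
    elif 490 <= w < 510:  r, g, b = 0.0, 1.0, -(w - 510) / 20.0
    elif 510 <= w < 580:  r, g, b = (w - 510) / 70.0, 1.0, 0.0
    elif 580 <= w < 645:  r, g, b = 1.0, -(w - 645) / 65.0, 0.0
    elif 645 <= w <= 780: r, g, b = 1.0, 0.0, 0.0
    else:                 r, g, b = 0.0, 0.0, 0.0
    return np.array([r, g, b])

def add_laser_beam(image, wavelength_nm, angle_deg, width, intensity):
    """Blend a synthetic laser stripe into an H x W x 3 image in [0, 1].

    The stripe passes through the image centre at `angle_deg`; pixels
    within `width` of its axis are brightened by the beam colour scaled
    by `intensity`, then the result is clipped back to [0, 1].
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    theta = np.deg2rad(angle_deg)
    # signed distance of each pixel from the beam's central line
    dist = np.abs((xs - w / 2) * np.sin(theta) - (ys - h / 2) * np.cos(theta))
    mask = (dist <= width / 2).astype(float)
    colour = wavelength_to_rgb(wavelength_nm)
    perturbed = image + intensity * mask[..., None] * colour
    return np.clip(perturbed, 0.0, 1.0)

# Example: a green (~532 nm) beam at 45 degrees on a blank image.
img = np.zeros((64, 64, 3))
adv = add_laser_beam(img, wavelength_nm=532.0, angle_deg=45.0, width=4.0, intensity=0.8)
```

In an attack loop, the tuple `(wavelength_nm, angle_deg, width, intensity)` would be searched (e.g. by random or greedy search) to maximize the target model's classification error on the perturbed image.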