It is known that deep neural networks (DNNs) are vulnerable to adversarial attacks. So-called physical adversarial examples deceive DNN-based decision makers by attaching adversarial patches to real objects. However, most existing work on physical adversarial attacks focuses on static objects such as glass frames, stop signs, and images attached to cardboard. In this work, we propose the adversarial T-shirt, a robust physical adversarial example for evading person detectors even when it undergoes non-rigid deformation due to a moving person's pose changes. To the best of our knowledge, this is the first work that models the effect of deformation when designing physical adversarial examples for non-rigid objects such as T-shirts. We show that the proposed method achieves 74% and 57% attack success rates in the digital and physical worlds, respectively, against YOLOv2. In contrast, the state-of-the-art physical attack method for fooling a person detector achieves only an 18% attack success rate. Furthermore, by leveraging min-max optimization, we extend our method to the ensemble attack setting against two object detectors, YOLOv2 and Faster R-CNN, simultaneously.
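The min-max ensemble attack mentioned above can be sketched as follows. This is a hypothetical formulation consistent with standard min-max ensemble attacks, not taken from the abstract itself; the symbols $\delta$, $\ell_i$, and $\mathbf{w}$ are our notation:

```latex
% Sketch of a min-max ensemble attack over K detectors (here K = 2:
% YOLOv2 and Faster R-CNN). \delta is the adversarial perturbation
% (the T-shirt pattern), \ell_i is the attack loss against detector i,
% and \mathbf{w} are convex combination weights over detectors. The
% inner maximization forces \delta to fool the worst-case mixture of
% detectors rather than just their average.
\min_{\delta} \; \max_{\mathbf{w} \in \mathcal{P}} \; \sum_{i=1}^{K} w_i \, \ell_i(\delta),
\qquad
\mathcal{P} = \left\{ \mathbf{w} \,:\, \mathbf{w} \ge 0,\; \mathbf{1}^{\top}\mathbf{w} = 1 \right\}
```

Under this reading, solving the inner maximization upweights whichever detector the current perturbation fools least, so the outer minimization cannot overfit to a single model.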