Adversarial attacks against object detection are feasible in the real world. However, most previous works learn local "patches" applied to an object to fool detectors, and these patches become less effective at oblique viewing angles. To address this issue, we propose the Dense Proposals Attack (DPA) to learn one-piece, physical, and targeted adversarial camouflages for detectors. The camouflages are one-piece because they are generated as a whole for an object, physical because they remain adversarial when filmed under arbitrary viewpoints and different illumination conditions, and targeted because they cause detectors to misidentify an object as a specific target class. To make the generated camouflages robust in the physical world, we introduce a combination of transformations to model physical phenomena. In addition, to strengthen the attacks, DPA simultaneously attacks all the classifications in the fixed proposals. Moreover, we build a virtual 3D scene using the Unity simulation engine to evaluate different physical attacks fairly and reproducibly. Extensive experiments demonstrate that DPA outperforms state-of-the-art methods, is generic to any object, and generalizes well to the real world, posing a potential threat to security-critical computer vision systems.
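The two key ingredients above, a targeted loss summed over all fixed proposal classifications and an expectation over sampled physical transformations, can be sketched as follows. This is a hedged illustration only: the function names, the toy `render` stand-in for the detector pipeline, and the softmax cross-entropy form are assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dpa_style_loss(proposal_logits, target_class):
    """Targeted loss over ALL proposal classifications at once:
    minimizing it pushes every fixed proposal toward the target class.
    proposal_logits has shape (num_proposals, num_classes)."""
    probs = softmax(proposal_logits)
    # negative log-likelihood of the target class, averaged over proposals
    return -np.log(probs[:, target_class] + 1e-12).mean()

def expected_loss_over_transforms(render, camouflage, target_class,
                                  num_samples=8, rng=None):
    """Average the targeted loss over sampled physical transformations
    (viewpoint, illumination, ...), in the spirit of robustness to
    real-world conditions. `render` is a hypothetical stand-in that
    maps (camouflage, transform params) to detector proposal logits."""
    rng = rng or np.random.default_rng(0)
    losses = []
    for _ in range(num_samples):
        params = {"view": rng.uniform(0.0, 360.0),   # camera angle (deg)
                  "light": rng.uniform(0.5, 1.5)}    # illumination scale
        logits = render(camouflage, params)
        losses.append(dpa_style_loss(logits, target_class))
    return float(np.mean(losses))
```

In an actual attack loop, the camouflage texture would be updated by gradient descent on this expected loss through a differentiable renderer and the detector; the sketch only shows the shape of the objective.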