Adversarial attacks against deep learning-based object detectors (ODs) have been studied extensively in recent years. These attacks cause the model to make incorrect predictions by placing a patch containing an adversarial pattern on the target object or anywhere within the frame. However, no prior research has proposed a misclassification attack on ODs in which the patch is applied to the target object itself. In this study, we propose a novel, universal, targeted, label-switch attack against the state-of-the-art object detector, YOLO. In our attack, we use (i) a tailored projection function that enables the placement of the adversarial patch on multiple target objects in the image (e.g., cars), each of which may be located at a different distance from the camera or viewed from a different angle, and (ii) a unique loss function capable of changing the label of the attacked objects. The proposed universal patch, which is trained in the digital domain, is transferable to the physical domain. We performed an extensive evaluation using different types of object detectors, different video streams captured by different cameras, and various target classes, and we evaluated different configurations of the adversarial patch in the physical domain.
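To make component (i) concrete, the sketch below shows one common way such a projection function can be realized: warping the patch with a homography onto a known planar region of each target object (so that distance and view angle are accounted for) before compositing it into the frame. This is a minimal illustration under stated assumptions, not the authors' implementation; it assumes the destination corners of the patch region in the image are given, and it uses kornia's perspective-warp utilities.

```python
import torch
import kornia.geometry.transform as KT

def place_patch(image, patch, corners):
    """Composite an adversarial patch onto a planar region of a target object.

    image:   (1, 3, H, W) scene tensor in [0, 1]
    patch:   (1, 3, h, w) adversarial patch tensor
    corners: (1, 4, 2) destination corners (x, y) of the patch in the image,
             e.g., the corners of a car door seen at some distance and angle
             (hypothetical input; how corners are obtained is not shown here)
    """
    h, w = patch.shape[-2:]
    # Source corners of the flat patch, in the same order as `corners`.
    src = torch.tensor([[[0.0, 0.0], [w - 1.0, 0.0],
                         [w - 1.0, h - 1.0], [0.0, h - 1.0]]]).to(corners)
    # Homography mapping the flat patch onto the object's planar region.
    H = KT.get_perspective_transform(src, corners)
    H_img, W_img = image.shape[-2:]
    warped = KT.warp_perspective(patch, H, dsize=(H_img, W_img))
    # Warp an all-ones mask the same way to know which pixels the patch covers.
    mask = KT.warp_perspective(torch.ones_like(patch), H, dsize=(H_img, W_img))
    return image * (1 - mask) + warped * mask
```

Because the warp is differentiable, gradients flow from the detector's output back to the patch pixels, which is what allows a single universal patch to be optimized across many object placements at once.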
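Component (ii), the label-switch objective, can be illustrated with a simple PyTorch-style loss. The sketch below is an assumption-laden illustration, not the paper's exact loss: it presumes YOLO-style per-box class logits and objectness scores for boxes already matched to the attacked objects, and the function name and inputs are hypothetical. It pushes each matched box toward an attacker-chosen label while keeping objectness high, so the object is relabeled rather than hidden.

```python
import torch
import torch.nn.functional as F

def label_switch_loss(class_logits, obj_scores, target_class):
    """Encourage the detector to keep the box but assign the attacker's label.

    class_logits: (N, C) class logits for boxes matched to attacked objects
    obj_scores:   (N,) objectness logits for those boxes
    target_class: int, attacker-chosen class index (e.g., the index of
                  "person" when the true object is a car)
    """
    tgt = torch.full((class_logits.size(0),), target_class,
                     dtype=torch.long, device=class_logits.device)
    # Cross-entropy pushes each matched box toward the attacker's label.
    cls_term = F.cross_entropy(class_logits, tgt)
    # Keep objectness high so the box survives confidence thresholding and
    # NMS: the goal is a label switch, not object hiding.
    obj_term = F.binary_cross_entropy_with_logits(
        obj_scores, torch.ones_like(obj_scores))
    return cls_term + obj_term
```

In a training loop of this kind, such a loss would be minimized over the patch pixels while the detector's weights stay frozen, typically with random scale, angle, and lighting augmentations so the resulting universal patch transfers from the digital to the physical domain.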