Adversarial attacks against deep learning-based object detectors have been studied extensively in the past few years. Most of the proposed attacks have targeted the model's integrity (i.e., caused the model to make incorrect predictions), while adversarial attacks targeting the model's availability, a critical aspect in safety-critical domains such as autonomous driving, have not yet been explored by the machine learning research community. In this paper, we propose a novel attack that increases the decision latency of an end-to-end object detection pipeline. We craft a universal adversarial perturbation (UAP) that targets a technique widely integrated into object detection pipelines: non-maximum suppression (NMS). Our experiments demonstrate the proposed UAP's ability to increase the processing time of individual frames by adding "phantom" objects that overload the NMS algorithm while preserving the detection of the original objects (which allows the attack to go undetected for a longer period of time).
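To make the availability angle concrete, the sketch below is a minimal NumPy re-implementation of standard greedy NMS, not the authors' pipeline; the names `greedy_nms`, `iou`, the `[x1, y1, x2, y2]` box layout, and the `iou_thresh` parameter are illustrative assumptions. It shows why flooding the detector's raw output with candidate boxes degrades latency: each kept box must be compared against all remaining candidates.

```python
import numpy as np

def iou(box, boxes):
    # Intersection-over-union between one box and a set of boxes,
    # all given as [x1, y1, x2, y2] corner coordinates.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def greedy_nms(boxes, scores, iou_thresh=0.5):
    # Standard greedy NMS: repeatedly keep the highest-scoring box and
    # suppress all remaining boxes that overlap it above iou_thresh.
    order = np.argsort(scores)[::-1]  # candidate indices, best first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # One pass of IoU comparisons against every remaining candidate.
        overlaps = iou(boxes[i], boxes[order[1:]])
        order = order[1:][overlaps <= iou_thresh]
    return keep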
```

The worst case for this loop arises when the candidates barely overlap one another: almost nothing is suppressed per iteration, so the loop runs roughly once per box and the total number of IoU comparisons grows quadratically with the number of candidates. Under this (assumed) cost model, a perturbation that injects many low-overlap "phantom" boxes inflates per-frame NMS time without removing the true detections.