Visual detection is a key task in autonomous driving, serving as a foundation for self-driving planning and control. Deep neural networks have achieved promising results in various computer vision tasks, but they are known to be vulnerable to adversarial attacks. A comprehensive understanding of the vulnerability of deep visual detectors is required before their robustness can be improved. However, only a few adversarial attack/defense works have focused on object detection, and most of them employ only classification and/or localization losses, ignoring the objectness aspect. In this paper, we identify a serious objectness-related adversarial vulnerability in YOLO detectors and present an effective attack strategy targeting the objectness aspect of visual detection in autonomous vehicles. Furthermore, to address this vulnerability, we propose a new objectness-aware adversarial training approach for visual detection. Experiments show that the proposed attack targeting the objectness aspect is 45.17% and 43.50% more effective than attacks generated from classification and/or localization losses on the KITTI and COCO_traffic datasets, respectively. Moreover, the proposed adversarial defense approach improves the detectors' robustness against objectness-oriented attacks by up to 21% and 12% mAP on KITTI and COCO_traffic, respectively.
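To make the objectness-oriented attack concrete, the sketch below shows a PGD-style perturbation that drives down a detector's objectness score inside an L-infinity ball. This is a minimal illustration, not the paper's implementation: the "objectness head" here is a hypothetical toy (a fixed linear map followed by a sigmoid) standing in for a YOLO network, so only the optimization loop carries over.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical stand-in for a detector's objectness head: a fixed
# linear map over a flattened 16-pixel "image" followed by a sigmoid.
# A real YOLO objectness head is a conv network; this toy exists only
# so the attack loop below is runnable.
rng = np.random.default_rng(0)
W = rng.normal(size=(16,))

def objectness(x):
    return sigmoid(W @ x)

def objectness_grad(x):
    # d/dx sigmoid(W @ x) = s * (1 - s) * W
    s = objectness(x)
    return s * (1.0 - s) * W

def pgd_objectness_attack(x, eps=0.05, alpha=0.01, steps=40):
    """Minimize the objectness score (so detections vanish) with
    signed gradient steps, projecting back into the L-inf eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        g = objectness_grad(x_adv)
        x_adv = x_adv - alpha * np.sign(g)      # descend on objectness
        x_adv = np.clip(x_adv, x - eps, x + eps)  # L-inf projection
    return x_adv

x = rng.normal(size=(16,))
x_adv = pgd_objectness_attack(x)
```

Attacks built from classification or localization losses follow the same loop with a different objective; the paper's point is that the objectness term is the more damaging target. The same inner loop, run during training on clean/adversarial pairs, is also the basic ingredient of objectness-aware adversarial training.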