Object detection, a fundamental computer vision task, has achieved remarkable progress with the emergence of deep neural networks. Nevertheless, few works explore the adversarial robustness of object detectors against adversarial attacks in practical real-world scenarios. Detectors are greatly challenged by imperceptible perturbations, suffering a sharp performance drop on clean images and extremely poor performance on adversarial images. In this work, we empirically study model training for adversarial robustness in object detection, whose difficulty we attribute largely to the conflict between learning on clean images and learning on adversarial images. To mitigate this issue, we propose a Robust Detector (RobustDet) based on adversarially-aware convolution, which disentangles gradients for model learning on clean and adversarial images. RobustDet further employs an Adversarial Image Discriminator (AID) and Consistent Features with Reconstruction (CFR) to ensure reliable robustness. Extensive experiments on PASCAL VOC and MS-COCO demonstrate that our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection accuracy on clean images.
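The core idea of adversarially-aware convolution can be pictured as a dynamic convolution whose kernel is an input-dependent mixture of several candidate kernels, so that clean and adversarial inputs can route their gradients through different kernels. The following is only a minimal 1-D NumPy sketch of that mixing mechanism under assumed shapes; the function name, the fixed gating logits, and the toy data are illustrative, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D vector
    e = np.exp(x - x.max())
    return e / e.sum()

def adversarially_aware_conv(x, kernels, gate_logits):
    """Mix K candidate kernels with input-dependent attention weights.

    In the paper's setting the gating signal would come from an
    adversarial-awareness branch; here it is passed in directly
    (illustrative sketch, not the authors' implementation).
    """
    w = softmax(gate_logits)              # (K,) mixture weights, sum to 1
    kernel = np.tensordot(w, kernels, 1)  # (k,) aggregated kernel
    return np.convolve(x, kernel, mode="same")

# Toy usage: 3 candidate kernels of width 3 on a length-8 signal.
rng = np.random.default_rng(0)
kernels = rng.normal(size=(3, 3))
x = rng.normal(size=8)
y = adversarially_aware_conv(x, kernels, gate_logits=np.array([2.0, 0.1, -1.0]))
```

Because the mixture weights depend on the input (via the gating signal), gradients from clean and adversarial examples concentrate on different candidate kernels, which is the disentangling effect the abstract refers to.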