Although a great number of adversarial attacks on deep-learning-based classifiers have been proposed, attacks on object detection systems remain rarely studied. In this paper, we propose a Half-Neighbor Masked Projected Gradient Descent (HNM-PGD) based attack, which can generate strong perturbations that fool different kinds of detectors under strict constraints. We also applied the proposed HNM-PGD attack in the CIKM 2020 AnalytiCup Competition, where it ranked within the top 1% on the leaderboard. We release the code at https://github.com/YanghaoZYH/HNM-PGD.
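The core idea of a masked PGD attack can be illustrated with a minimal sketch. This is not the paper's implementation: the Half-Neighbor mask construction and the detector loss are replaced here by a hypothetical fixed binary mask and a toy quadratic loss, so only the masked gradient-sign update and the projection step are shown.

```python
import numpy as np

def toy_loss_grad(x, target):
    # Gradient of 0.5 * ||x - target||^2; a stand-in for the gradient of
    # the (much more complex) detector loss used in the actual attack.
    return x - target

def masked_pgd(x0, target, mask, eps=0.1, alpha=0.02, steps=10):
    """Ascend the loss with sign-gradient steps, perturbing only pixels
    where mask == 1, and project the perturbation back into the
    L-infinity ball of radius eps around the original image x0."""
    x = x0.copy()
    for _ in range(steps):
        g = toy_loss_grad(x, target)
        x = x + alpha * np.sign(g) * mask       # masked gradient-sign step
        x = x0 + np.clip(x - x0, -eps, eps)     # project onto the eps-ball
        x = np.clip(x, 0.0, 1.0)                # stay in a valid image range
    return x

# Toy 4x4 "image": perturb only the top half, as a placeholder for the
# Half-Neighbor mask selected by the real method.
x0 = np.full((4, 4), 0.5)
mask = np.zeros((4, 4))
mask[:2, :] = 1.0
x_adv = masked_pgd(x0, target=np.ones((4, 4)), mask=mask)
```

The mask restricts where the perturbation may appear (the strict-constraint setting mentioned above), while the projection keeps its magnitude bounded; the real attack differs only in how the mask and loss are obtained.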