Adversarial attacks in the physical world can degrade the robustness of detection models. Evaluating this robustness in the physical world is challenging because the required experiments are time-consuming and labor-intensive; virtual simulation experiments offer a practical alternative. However, no unified detection benchmark based on a virtual simulation environment currently exists. To address this gap, we propose an instant-level data generation pipeline based on the CARLA simulator. Using this pipeline, we generated the DCI dataset and conducted extensive experiments on three detection models and three physical adversarial attacks. The dataset covers 7 continuous scenes and 1 discrete scene, with over 40 angles, 20 distances, and 20,000 positions. The results indicate that Yolo v6 showed the strongest resistance, with an average AP drop of only 6.59%, while ASA was the most effective attack algorithm, reducing average AP by 14.51%, roughly twice the reduction achieved by the other algorithms. Static scenes yielded higher recognition AP, and results were similar across different weather conditions. Improvements to adversarial attack algorithms may be approaching their limit.
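As a minimal illustrative sketch (not code from the paper), the "average AP drop" metric used above can be computed by averaging the per-scene difference between clean and attacked AP; the function name and the numbers below are hypothetical placeholders, not the paper's reported values:

```python
def average_ap_drop(clean_aps, attacked_aps):
    """Mean AP drop (in percentage points) across evaluation scenes.

    clean_aps / attacked_aps: AP values (in %) for the same scenes,
    measured without and with the physical adversarial attack applied.
    """
    assert len(clean_aps) == len(attacked_aps) and clean_aps
    drops = [clean - attacked for clean, attacked in zip(clean_aps, attacked_aps)]
    return sum(drops) / len(drops)

# Hypothetical per-scene AP values, for illustration only:
print(average_ap_drop([85.0, 90.0], [80.0, 82.0]))  # -> 6.5
```

A lower average drop indicates a more robust detector, while a higher drop indicates a more effective attack.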