Adversarial examples represent a serious threat for deep neural networks in several application domains, and a large body of work has been devoted to investigating them and mitigating their effects. Nevertheless, little work has addressed the generation of datasets specifically designed to evaluate the adversarial robustness of neural models. This paper presents CARLA-GeAR, a tool for the automatic generation of photo-realistic synthetic datasets that can be used both for a systematic evaluation of the adversarial robustness of neural models against physical adversarial patches and for comparing the performance of different adversarial defense/detection methods. The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving. The adversarial patches included in the generated datasets are attached to billboards or to the back of a truck, and are crafted with state-of-the-art white-box attack strategies to maximize the prediction error of the model under test. Finally, the paper presents an experimental study that evaluates the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world. All the code and datasets used in this paper are available at http://carlagear.retis.santannapisa.it.
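To make the patch-crafting idea concrete, the sketch below shows a generic white-box patch attack of the kind the abstract refers to: gradient ascent on the task loss with respect to the patch texture, so that the patch maximizes the prediction error of the model under test. This is a minimal illustration, not CARLA-GeAR's actual implementation; the names `model`, `loader`, `apply_patch`, and `mask` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def apply_patch(images, patch, mask):
    """Paste the patch texture onto each image in the region selected by mask
    (1 inside the patch area, 0 elsewhere). A real physical-patch pipeline
    would also apply perspective and lighting transforms."""
    return images * (1 - mask) + patch * mask

def craft_patch(model, loader, mask, steps=200, lr=0.01):
    """Hypothetical white-box patch optimization (classification example)."""
    model.eval()
    patch = torch.rand(1, 3, 224, 224, requires_grad=True)  # patch texture
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        for images, labels in loader:
            opt.zero_grad()
            logits = model(apply_patch(images, patch, mask))
            # Gradient *ascent* on the task loss: minimize its negation
            # to maximize the model's prediction error.
            loss = -F.cross_entropy(logits, labels)
            loss.backward()
            opt.step()
            patch.data.clamp_(0, 1)  # keep the texture a valid image
    return patch.detach()
```

For other vision tasks (e.g., detection or segmentation), the same loop applies with the task-specific loss swapped in for the cross-entropy term.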