Deep neural networks (DNNs) are vulnerable to adversarial examples, inputs carefully designed to cause a deep learning model to make mistakes. Adversarial examples for 2D images and 3D point clouds have been studied extensively, but studies on event-based data remain limited. Event-based data can serve as an alternative to 2D images under high-speed motion, such as in autonomous driving; however, adversarial events expose current deep learning models to safety risks. In this work, we generate adversarial examples for event-based data and then train robust models on them, for the first time. Our algorithm shifts the timestamps of the original events and generates additional adversarial events in two stages. First, null events are added to the event stream as placeholders for the additional adversarial events; the perturbation size can be controlled through the number of null events. Second, the location and time of the additional adversarial events are set by a gradient-based attack so as to mislead the DNN. Our algorithm achieves an attack success rate of 97.95\% on the N-Caltech101 dataset. Furthermore, the adversarially trained model is more robust to adversarial event data than the original model.
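The two-stage procedure described above can be sketched roughly as follows. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes events are given as integer (x, y) pixel coordinates with normalized timestamps t in [0, 1] and polarities p in {-1, +1}, uses a toy stand-in classifier, and makes the temporal binning differentiable via linear interpolation so that timestamps can be attacked by gradient ascent. Null events are appended with zero polarity (stage one), and a gradient-based attack then adjusts the time shifts of all events and the polarities of the null events (stage two); unlike the method summarized above, the null events' spatial locations here are sampled once at random rather than optimized. All names (`EventClassifier`, `soft_voxel_grid`, `n_null`, `eps_t`) are hypothetical.

```python
# Hypothetical sketch of a two-stage adversarial attack on event data.
import torch
import torch.nn as nn
import torch.nn.functional as F

H, W, T_BINS, N_CLASSES = 32, 32, 5, 101

class EventClassifier(nn.Module):
    """Toy stand-in for the victim DNN; consumes a (T_BINS, H, W) voxel grid."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(T_BINS, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, N_CLASSES))
    def forward(self, grid):
        return self.net(grid.unsqueeze(0))  # add batch dimension

def soft_voxel_grid(xy, t, p):
    """Voxel grid that is differentiable in time: each event's polarity is
    split between its two nearest temporal bins by linear interpolation."""
    grid = torch.zeros(T_BINS, H, W)
    tb = t.clamp(0, 1) * (T_BINS - 1)            # continuous bin coordinate
    lo = tb.floor().long().clamp(0, T_BINS - 1)
    hi = (lo + 1).clamp(0, T_BINS - 1)
    w_hi = tb - lo.float()                        # weight of the upper bin
    grid.index_put_((lo, xy[:, 1], xy[:, 0]), p * (1 - w_hi), accumulate=True)
    grid.index_put_((hi, xy[:, 1], xy[:, 0]), p * w_hi, accumulate=True)
    return grid

def attack(model, xy, t, p, label, n_null=200, eps_t=0.05, steps=20, lr=0.01):
    # Stage 1: append null events with zero polarity; the perturbation
    # budget grows with n_null, as described in the abstract.
    xy = torch.cat([xy, torch.randint(0, W, (n_null, 2))])
    t = torch.cat([t, torch.rand(n_null)])
    p = torch.cat([p, torch.zeros(n_null)])
    # Stage 2: gradient-based search over time shifts of all events (dt)
    # and polarities of the appended null events (dp).
    dt = torch.zeros_like(t, requires_grad=True)
    dp = torch.zeros(n_null, requires_grad=True)
    for _ in range(steps):
        model.zero_grad()
        p_adv = torch.cat([p[:-n_null], (p[-n_null:] + dp).clamp(-1, 1)])
        loss = F.cross_entropy(model(soft_voxel_grid(xy, t + dt, p_adv)),
                               torch.tensor([label]))
        loss.backward()
        with torch.no_grad():                     # ascend the loss, clip shifts
            dt += lr * dt.grad.sign(); dt.clamp_(-eps_t, eps_t); dt.grad.zero_()
            dp += lr * dp.grad.sign(); dp.grad.zero_()
    p_final = torch.cat([p[:-n_null], (p[-n_null:] + dp).clamp(-1, 1)])
    return xy, (t + dt).detach(), p_final.detach()
```

In this sketch the perturbation is bounded by construction: timestamps move at most `eps_t`, and new events can appear only where null events were appended, so `n_null` directly controls the perturbation size. The adversarial events returned by `attack` could then be mixed into training batches to adversarially train a robust model.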