Spiking Neural Networks (SNNs) aim at providing energy-efficient learning capabilities when implemented on neuromorphic chips with event-based Dynamic Vision Sensors (DVS). This paper studies the robustness of SNNs against adversarial attacks on such DVS-based systems, and proposes R-SNN, a novel methodology for robustifying SNNs through efficient DVS-noise filtering. We are the first to generate adversarial attacks on DVS signals (i.e., frames of events in the spatio-temporal domain) and to apply noise filters for DVS sensors in the quest for defending against adversarial attacks. Our results show that the noise filters effectively prevent the SNNs from being fooled. The SNNs in our experiments provide more than 90% accuracy on the DVS-Gesture and NMNIST datasets under different adversarial threat models.
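To illustrate the kind of DVS-noise filtering the abstract refers to, below is a minimal sketch of a spatio-temporal background-activity filter, assuming events arrive as (x, y, timestamp, polarity) tuples; the neighborhood size `s` and temporal threshold `T` are hypothetical parameters for illustration, not the paper's settings.

```python
# Sketch of a spatio-temporal DVS-noise filter (assumed event format:
# (x, y, timestamp_us, polarity)). Parameters s and T are hypothetical.
# An event is kept only if some pixel in its s-neighborhood fired within
# the last T microseconds, so isolated (noise or injected) events are dropped.
import numpy as np

def filter_dvs_events(events, width, height, s=1, T=5000):
    """Keep only events supported by recent activity in their s-neighborhood."""
    last_ts = np.full((height, width), -np.inf)  # most recent event time per pixel
    kept = []
    for x, y, t, p in events:
        y0, y1 = max(0, y - s), min(height, y + s + 1)
        x0, x1 = max(0, x - s), min(width, x + s + 1)
        # Support check: any nearby pixel active within the last T microseconds?
        if np.any(t - last_ts[y0:y1, x0:x1] <= T):
            kept.append((x, y, t, p))
        last_ts[y, x] = t  # update after the check so the event cannot support itself
    return kept
```

Under this assumption, adversarial perturbations that inject spatially and temporally isolated events would be discarded before the event stream reaches the SNN, which is consistent with the defense idea described above.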