Spiking Neural Networks (SNNs), despite being energy-efficient when implemented on neuromorphic hardware and coupled with event-based Dynamic Vision Sensors (DVS), are vulnerable to security threats such as adversarial attacks, i.e., small perturbations added to the input to induce a misclassification. To this end, we propose DVS-Attacks, a set of stealthy yet efficient adversarial attack methodologies designed to perturb the event sequences that form the input of SNNs. First, we show that noise filters for DVS can be used as defense mechanisms against adversarial attacks. We then implement several attacks and test them in the presence of two types of noise filters for DVS cameras. The experimental results show that the filters can only partially defend the SNNs against our proposed DVS-Attacks. Using the best settings for the noise filters, our proposed Mask Filter-Aware Dash Attack reduces the accuracy by more than 20% on the DVS-Gesture dataset and by more than 65% on the MNIST dataset, compared to the original clean frames. The source code of all the proposed DVS-Attacks and noise filters is released at https://github.com/albertomarchisio/DVS-Attacks.
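To make the defense mechanisms mentioned above concrete, the sketch below illustrates the spatio-temporal correlation idea behind typical DVS noise filters such as the Background Activity Filter: an event is kept only if a nearby pixel fired recently, so isolated spurious events, whether sensor noise or adversarially injected spikes, are discarded. This is a minimal sketch under stated assumptions, not the implementation from the released repository: the function name, parameters, and the (x, y, t, p) event layout with microsecond timestamps are illustrative choices.

```python
import numpy as np

def background_activity_filter(events, sensor_shape=(128, 128), time_window_us=5000):
    """Keep only events supported by recent activity in their 3x3 neighborhood.

    events: iterable of (x, y, t, p) tuples with t in microseconds, sorted by t.
    Returns the surviving events as a NumPy array.
    """
    last_ts = np.full(sensor_shape, -np.inf)  # last event timestamp seen per pixel
    kept = []
    for x, y, t, p in events:
        x, y = int(x), int(y)
        # Clip the 3x3 neighborhood around (x, y) to the sensor array bounds.
        x0, x1 = max(x - 1, 0), min(x + 2, sensor_shape[0])
        y0, y1 = max(y - 1, 0), min(y + 2, sensor_shape[1])
        # Keep the event only if some neighboring pixel fired within the window;
        # an isolated injected spike has no recent neighbor and is dropped.
        if t - last_ts[x0:x1, y0:y1].max() <= time_window_us:
            kept.append((x, y, t, p))
        last_ts[x, y] = t  # record this event regardless of the keep/drop decision
    return np.array(kept)
```

The second filter type evaluated in the paper, the Mask Filter, follows a different principle (masking out pixels based on their accumulated activity), which is why a filter-aware attack such as the Mask Filter-Aware Dash Attack can still shape its perturbations to survive filtering.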