Spiking neural networks (SNNs) have attracted great attention due to their low power consumption, low latency, and biological plausibility. As they are widely deployed in neuromorphic devices for low-power brain-inspired computing, security issues become increasingly important. However, compared with deep neural networks (DNNs), SNNs currently lack specifically designed defense methods against adversarial attacks. Inspired by neural membrane potential oscillation, we propose a novel neuron model that incorporates a bio-inspired oscillation mechanism to enhance the security of SNNs. Our experiments show that SNNs with oscillation neurons resist adversarial attacks better than ordinary SNNs with LIF neurons across a variety of architectures and datasets. Furthermore, we propose a defense method that alters the model's gradients by replacing the form of the oscillation: the replacement hides the original training gradients and misleads the attacker into using the gradients of 'fake' neurons to generate invalid adversarial examples. Our experiments suggest that the proposed defense effectively resists both single-step and iterative attacks, with defense effectiveness comparable to, and computational cost much lower than, adversarial training methods for DNNs. To the best of our knowledge, this is the first work that establishes an adversarial defense by masking surrogate gradients in SNNs.
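To make the two ideas in the abstract concrete, the following minimal PyTorch sketch shows (a) an LIF neuron whose membrane potential carries an oscillation term, and (b) a spike function whose backward pass deliberately uses a surrogate gradient that differs from the one used in training. This is an illustrative sketch under assumed forms, not the paper's actual model: the cosine oscillation, its parameters (`omega`, `amp`), and the rectangular 'fake' surrogate are all our assumptions.

```python
import torch

class SpikeWithMaskedGrad(torch.autograd.Function):
    """Heaviside spike in the forward pass; the backward pass substitutes a
    'fake' surrogate gradient, so a gradient-based attacker differentiates
    through the wrong function and produces invalid adversarial examples."""
    @staticmethod
    def forward(ctx, v, threshold=1.0):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Assumed 'fake' surrogate: a narrow rectangular window around the
        # threshold, different from whatever surrogate was used in training.
        fake_grad = ((v - ctx.threshold).abs() < 0.1).float()
        return grad_out * fake_grad, None

class OscillatingLIF(torch.nn.Module):
    """LIF neuron with an assumed cosine oscillation added to the membrane
    potential dynamics (illustrative stand-in for the bio-inspired mechanism)."""
    def __init__(self, tau=2.0, threshold=1.0, omega=0.5, amp=0.1):
        super().__init__()
        self.tau, self.threshold = tau, threshold
        self.omega, self.amp = omega, amp  # assumed oscillation parameters

    def forward(self, inputs):  # inputs: (time_steps, batch, features)
        v = torch.zeros_like(inputs[0])
        spikes = []
        for t, x in enumerate(inputs):
            # Leaky integration plus the assumed oscillation term.
            osc = self.amp * torch.cos(torch.tensor(self.omega * t))
            v = v + (x - v) / self.tau + osc
            s = SpikeWithMaskedGrad.apply(v, self.threshold)
            v = v * (1.0 - s)  # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)
```

In this sketch the defense costs only a swap of the backward function at deployment time, which is consistent with the abstract's claim of much lower cost than adversarial training, though the specific surrogate forms the paper replaces are not given here.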