Spiking neural networks (SNNs) are well known as brain-inspired models with high computing efficiency, owing to a key property: they use spikes as information units, in close analogy to biological neural systems. Although spike-based models are energy efficient by exploiting discrete spike signals, their performance is limited by current network structures and training methods. Because spikes are discrete signals, typical SNNs cannot apply gradient-descent rules directly to parameter adjustment as artificial neural networks (ANNs) do. To address this limitation, we propose a novel method for constructing deep SNN models with knowledge distillation (KD), using an ANN as the teacher model and an SNN as the student model. Through an ANN-SNN joint training algorithm, the student SNN learns rich feature information from the teacher ANN via KD, while avoiding training the SNN from scratch through non-differentiable spikes. Our method not only builds a more efficient deep spiking architecture feasibly and reasonably, but also requires few time steps to train the whole model compared with direct training or ANN-to-SNN conversion methods. More importantly, it exhibits strong noise immunity to various types of artificial noise and natural signals. The proposed method provides an efficient way to improve SNN performance by constructing deeper structures in a high-throughput fashion, with potential use for lightweight and efficient brain-inspired computing in practical scenarios.
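To make the described ANN-SNN joint training concrete, the following is a minimal sketch of one KD training step. It assumes a frozen teacher ANN providing soft targets, a rate-coded student SNN unrolled over a few time steps with a rectangular surrogate gradient, and standard Hinton-style distillation; the neuron model, surrogate window, layer sizes, and hyperparameters (`T`, `tau`, `alpha`) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Surrogate-gradient spike function: Heaviside forward, smooth backward.
# A rectangular surrogate is one common choice; the paper may use another.
class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 1.0).float()  # fire when membrane potential crosses threshold 1.0

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * ((v - 1.0).abs() < 0.5).float()  # rectangular window at threshold

spike = SpikeFn.apply

# Illustrative LIF-style student SNN: integrates input over T time steps
# and averages the readout into rate-coded logits.
class StudentSNN(nn.Module):
    def __init__(self, in_dim, hidden, n_classes, T=4, decay=0.5):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_classes)
        self.T, self.decay = T, decay

    def forward(self, x):
        v = torch.zeros(x.size(0), self.fc1.out_features, device=x.device)
        rate = 0.0
        for _ in range(self.T):               # few time steps, as in the abstract
            v = self.decay * v + self.fc1(x)  # leaky integration of input current
            s = spike(v)
            v = v * (1.0 - s)                 # hard reset after a spike
            rate = rate + self.fc2(s)
        return rate / self.T                  # rate-coded logits

def kd_loss(student_logits, teacher_logits, labels, tau=4.0, alpha=0.7):
    """Standard KD objective: teacher soft targets plus hard labels."""
    soft = F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                    F.softmax(teacher_logits / tau, dim=1),
                    reduction="batchmean") * tau * tau
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# One joint-training step: the frozen teacher ANN supplies soft targets,
# and gradients flow into the SNN through the surrogate spike function.
teacher_ann = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
student_snn = StudentSNN(784, 256, 10)
opt = torch.optim.Adam(student_snn.parameters(), lr=1e-3)

x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
with torch.no_grad():
    t_logits = teacher_ann(x)
loss = kd_loss(student_snn(x), t_logits, y)
opt.zero_grad(); loss.backward(); opt.step()
```

Because the surrogate function supplies a gradient through the otherwise non-differentiable spike, the student can be optimized end to end against the teacher's soft targets without training the SNN from scratch.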