Spiking neural networks (SNNs) have been gaining interest as energy-efficient alternatives to conventional artificial neural networks (ANNs) due to their event-driven computation. In view of the future deployment of SNN models on resource-constrained neuromorphic devices, many studies have applied techniques originally developed for ANN model compression, such as network quantization, pruning, and knowledge distillation, to SNNs. Among them, existing works on knowledge distillation reported accuracy improvements of the student SNN model. However, an analysis of energy efficiency, which is also an important property of SNNs, was absent. In this paper, we thoroughly analyze the performance of the distilled SNN model in terms of both accuracy and energy efficiency. In the process, we observe a substantial increase in the number of spikes, leading to energy inefficiency, when conventional knowledge distillation methods are used. Based on this analysis, we propose a novel knowledge distillation method with heterogeneous temperature parameters to achieve energy efficiency. We evaluate our method on two different datasets and show that the resulting student SNN achieves both an accuracy improvement and a reduction in the number of spikes. On the MNIST dataset, our proposed student SNN achieves up to 0.09% higher accuracy and produces 65% fewer spikes compared to the student SNN trained with the conventional knowledge distillation method. We also compare the results with other SNN compression techniques and training methods.
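For context, the conventional knowledge distillation baseline referred to above follows the standard temperature-scaled formulation (Hinton et al.): the student is trained against softened teacher outputs combined with the hard-label loss. The sketch below illustrates that baseline loss only, not the heterogeneous-temperature method proposed in this paper; the parameter names `T` and `alpha` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Conventional temperature-scaled knowledge distillation loss (baseline sketch)."""
    # Soft targets from the teacher, softened by a single temperature T
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    # KL term scaled by T^2 so its gradient magnitude matches the hard-label term
    kd_term = F.kl_div(log_soft_student, soft_targets, reduction="batchmean") * (T * T)
    # Hard-label cross-entropy on the student's raw logits
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```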