Long training time hinders the realization of deep, large-scale Spiking Neural Networks (SNNs) with on-chip learning capability on embedded hardware. Our work proposes a novel connection pruning approach that can be applied during on-chip Spike-Timing-Dependent Plasticity (STDP)-based learning to optimize the learning time and the network connectivity of deep SNNs. We applied our approach to a deep SNN with Time-To-First-Spike (TTFS) coding and achieved a 2.1x speed-up and 64% energy savings in on-chip learning, together with a 92.83% reduction in network connectivity, without incurring any accuracy loss. Moreover, the connectivity reduction yields a 2.83x speed-up and 78.24% energy savings in inference. Evaluation of our proposed approach on a Field Programmable Gate Array (FPGA) platform showed that only 0.56% power overhead is needed to implement the pruning algorithm.
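To make the idea of pruning during STDP-based learning concrete, below is a minimal sketch in Python. It assumes a generic pairwise exponential STDP rule and a simple magnitude-based pruning criterion; all constants, array shapes, and the threshold are illustrative assumptions, not the paper's actual on-chip, TTFS-specific algorithm.

```python
# Illustrative sketch only: generic pairwise STDP with magnitude-based
# connection pruning. Constants and the pruning criterion are assumptions
# for illustration; they do not reproduce the paper's on-chip algorithm.
import numpy as np

rng = np.random.default_rng(0)

N_PRE, N_POST = 64, 16
weights = rng.uniform(0.0, 1.0, size=(N_PRE, N_POST))
mask = np.ones_like(weights, dtype=bool)   # True = connection still active

A_PLUS, A_MINUS = 0.01, 0.012              # STDP learning rates (assumed)
TAU = 20.0                                 # STDP time constant in ms (assumed)
PRUNE_THRESHOLD = 0.05                     # prune weights below this (assumed)

def stdp_update(t_pre, t_post):
    """Apply a pairwise STDP update for one set of pre/post spike times.

    t_pre, t_post: spike-time arrays (ms) for pre- and post-neurons.
    Pruned connections (mask == False) receive no update, which is the
    source of the learning-time savings that pruning aims for.
    """
    global weights
    dt = t_post[None, :] - t_pre[:, None]          # pairwise timing differences
    dw = np.where(dt >= 0,
                  A_PLUS * np.exp(-dt / TAU),      # pre before post: potentiate
                  -A_MINUS * np.exp(dt / TAU))     # post before pre: depress
    weights = np.clip(weights + mask * dw, 0.0, 1.0)

def prune():
    """Permanently disable connections whose weights fell below threshold."""
    global mask
    mask &= weights >= PRUNE_THRESHOLD
    weights[~mask] = 0.0

# Toy training loop over random spike times.
for step in range(100):
    t_pre = rng.uniform(0, 50, size=N_PRE)
    t_post = rng.uniform(0, 50, size=N_POST)
    stdp_update(t_pre, t_post)
    prune()

print(f"connectivity remaining: {mask.mean():.2%}")
```

In this toy setting, pruning happens after every STDP update so that disabled connections are excluded from all subsequent computation; on hardware, skipping pruned synapses is what translates the connectivity reduction into learning- and inference-time speed-ups.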