Long training times hinder deep Spiking Neural Networks (SNNs) with online learning capability from being realized on embedded hardware. Our work proposes a novel connection pruning approach that can be applied during online Spike-Timing-Dependent Plasticity (STDP)-based learning to reduce both the learning time and the network connectivity of a deep SNN. We evaluated our approach on a deep SNN with Time-To-First-Spike (TTFS) coding and achieved a 2.1x speed-up in online learning while reducing network connectivity by 92.83%. Energy consumption during online learning was reduced by 64%. Moreover, the connectivity reduction yields a 2.83x speed-up and 78.24% energy savings in inference. Meanwhile, classification accuracy on the Caltech 101 dataset remains the same as that of our non-pruning baseline. In addition, we developed an event-driven hardware architecture on a Field-Programmable Gate Array (FPGA) platform that efficiently incorporates our connection pruning approach while incurring as little as 0.56% power overhead. We also compared our work with existing connection pruning approaches for SNNs to highlight the key features of each. To the best of our knowledge, ours is the first connection pruning algorithm that can be applied during online STDP-based learning for a deep SNN with TTFS coding.
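The abstract does not specify the pruning criterion or its schedule, so the following is only a minimal illustrative sketch of pruning interleaved with STDP updates in a TTFS-coded layer. The function names (stdp_update, prune_connections), the magnitude-based pruning criterion, and the threshold theta are assumptions for illustration, not the paper's method.

    import numpy as np

    def stdp_update(w, pre_spike_t, post_spike_t, lr=0.01, tau=20.0, mask=None):
        """One illustrative STDP update using TTFS first-spike times.

        pre_spike_t, post_spike_t: first-spike times of the pre- and
        post-synaptic neurons (TTFS coding: smaller = earlier).
        """
        # Pairwise spike-timing differences, shape (n_pre, n_post).
        dt = post_spike_t[None, :] - pre_spike_t[:, None]
        dw = np.where(dt > 0,
                      lr * np.exp(-dt / tau),   # pre fires before post: potentiate
                      -lr * np.exp(dt / tau))   # post fires before pre: depress
        if mask is not None:
            dw *= mask                          # pruned connections stay pruned
        return np.clip(w + dw, 0.0, 1.0)

    def prune_connections(w, mask, theta=0.05):
        """Permanently disable connections whose weight fell below theta."""
        mask &= (w >= theta)
        return mask

    # Example: 4 pre-neurons fully connected to 3 post-neurons.
    rng = np.random.default_rng(0)
    w = rng.uniform(0.0, 1.0, (4, 3))
    mask = np.ones_like(w, dtype=bool)
    w = stdp_update(w, pre_spike_t=rng.uniform(0, 50, 4),
                    post_spike_t=rng.uniform(0, 50, 3), mask=mask)
    mask = prune_connections(w, mask)

In a sketch like this, the boolean mask makes pruning permanent during online learning, so both spike propagation and weight updates can be skipped for pruned synapses; this is consistent with the abstract's reported speed-ups and energy savings in both learning and inference.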