Spiking neural networks (SNNs) with leaky integrate-and-fire (LIF) neurons can be operated in an event-driven manner and have internal states that retain information over time, providing opportunities for energy-efficient neuromorphic computing, especially on edge devices. Note, however, that many representative works on SNNs do not fully demonstrate the usefulness of their inherent recurrence (membrane potentials retaining information about the past) for sequential learning. Most of these works train SNNs to recognize static images whose input representation is artificially expanded in time through rate coding. We show that SNNs can be trained for sequential tasks and propose modifications to a network of LIF neurons that enable internal states to learn long sequences and make their inherent recurrence resilient to the vanishing gradient problem. We then develop a training scheme for the proposed SNNs with improved inherent recurrence dynamics. Our training scheme allows spiking neurons to produce multi-bit outputs (as opposed to binary spikes), which helps mitigate the mismatch between the derivative of the spiking neurons' activation function and the surrogate derivative used to overcome their non-differentiability. Our experimental results indicate that the proposed SNN architecture yields accuracy comparable to that of LSTMs on the TIMIT and LibriSpeech 100h datasets (within 1.10% and 0.36%, respectively), but with 2x fewer parameters than LSTMs. The sparse SNN outputs also lead to 10.13x and 11.14x savings in multiplication operations on TIMIT and LibriSpeech 100h, respectively, compared to GRUs, which are generally considered a lightweight alternative to LSTMs.
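For readers unfamiliar with the mechanics referred to above, the following is a minimal sketch (not the paper's exact formulation) of LIF neuron dynamics trained with a surrogate gradient: the membrane potential provides the inherent recurrence, the hard threshold is non-differentiable, and a smooth surrogate replaces its derivative in the backward pass. The leak factor, threshold, and surrogate shape are hypothetical choices for illustration only.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth surrogate
    derivative (fast-sigmoid shape, assumed here) in the backward pass."""

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_thresh,) = ctx.saved_tensors
        # Surrogate derivative: large near the threshold, decays away from it.
        surrogate = 1.0 / (1.0 + 10.0 * v_minus_thresh.abs()) ** 2
        return grad_output * surrogate


def lif_step(x, v, leak=0.9, thresh=1.0):
    """One time step of a leaky integrate-and-fire neuron.

    v is the membrane potential (the inherent recurrence retaining
    information across steps); x is the synaptic input at this step.
    """
    v = leak * v + x                       # leaky integration
    spike = SurrogateSpike.apply(v - thresh)
    v = v - spike * thresh                 # soft reset after firing
    return spike, v
```

The paper's proposed neurons emit multi-bit (graded) outputs rather than the binary spikes shown here; this sketch only illustrates the baseline LIF dynamics and the surrogate-gradient idea the abstract refers to.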