Event-based simulations of Spiking Neural Networks (SNNs) are fast and accurate. However, they are rarely used in the context of event-based gradient descent because they are difficult to implement on GPUs. Discretization with the forward Euler method is instead commonly combined with gradient-descent techniques, but it has the disadvantage of being computationally expensive. Moreover, the limited precision of discretized simulations can create mismatches between the simulated models and analog neuromorphic hardware. In this work, we propose a new exact error-backpropagation-through-spikes method for SNNs, extending Fast \& Deep to multiple spikes per neuron. We show that our method can be efficiently implemented on GPUs in a fully event-based manner, making it both fast to compute and precise enough for analog neuromorphic hardware. Compared to the original Fast \& Deep and the current state-of-the-art event-based gradient-descent algorithms, we demonstrate improved performance on several benchmark datasets with both feedforward and convolutional SNNs. In particular, we show that multi-spike SNNs offer advantages over single-spike networks in terms of convergence, sparsity, classification latency, and sensitivity to the dead-neuron problem.
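To make the contrast between event-based simulation and forward-Euler discretization concrete, the following minimal Python sketch simulates a single leaky integrate-and-fire (LIF) neuron in a purely event-driven way: the state is updated analytically only at input spike times, with no time grid. The function name `lif_event_based` and the simplified delta-synapse (instantaneous) kernel are illustrative assumptions only; the paper's method uses a different neuron model and additionally computes exact gradients, which this sketch does not attempt.

```python
import numpy as np

def lif_event_based(spike_times, weights, tau_m=10.0, v_th=1.0):
    """Illustrative event-driven simulation of one LIF neuron with
    delta synapses. Between input events the membrane potential decays
    analytically, so no fixed time step is required (a hypothetical
    simplification, not the paper's actual algorithm)."""
    v, t_prev, out = 0.0, 0.0, []
    order = np.argsort(spike_times)
    for t, w in zip(np.asarray(spike_times)[order],
                    np.asarray(weights)[order]):
        v *= np.exp(-(t - t_prev) / tau_m)  # exact decay between events
        v += w                              # instantaneous synaptic kick
        if v >= v_th:                       # threshold reached at an event
            out.append(t)                   # record the output spike time
            v = 0.0                         # reset the membrane potential
        t_prev = t
    return out

# Example: three input spikes push the neuron over threshold at t = 2.5
print(lif_event_based([1.0, 2.0, 2.5], [0.6, 0.3, 0.4]))
```

With delta synapses the potential can only cross threshold at an input event, so iterating over spikes in time order is exact; a forward-Euler simulation of the same neuron would instead evaluate every time step, which is the computational cost the abstract refers to.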