There is increasing interest in emulating Spiking Neural Networks (SNNs) on neuromorphic computing devices due to their low energy consumption. Recent advances have allowed training SNNs to the point where they start to compete with traditional Artificial Neural Networks (ANNs) in terms of accuracy, while remaining energy efficient when run on neuromorphic hardware. However, the process of training SNNs is still based on dense tensor operations originally developed for ANNs, which do not leverage the spatiotemporally sparse nature of SNNs. We present here the first sparse SNN backpropagation algorithm, which achieves the same or better accuracy as current state-of-the-art methods while being significantly faster and more memory efficient. We show the effectiveness of our method on real datasets of varying complexity (Fashion-MNIST, Neuromorphic-MNIST and Spiking Heidelberg Digits), achieving a speedup of up to 150x in the backward pass and being 85% more memory efficient, without losing accuracy.
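To make the sparsity argument concrete, the following is a minimal, hypothetical sketch (NumPy, not the paper's actual algorithm or implementation) of the general idea: in surrogate-gradient training, the backward-pass contribution of a neuron is only non-negligible when its membrane potential was close to the firing threshold, so gradient work can be restricted to those "active" entries instead of evaluating a dense tensor. The function names, the particular surrogate form, and the band parameter below are illustrative assumptions.

import numpy as np

def dense_surrogate_grad(v, threshold=1.0, scale=10.0):
    # Dense baseline: evaluate the surrogate derivative for every neuron and time step.
    return 1.0 / (scale * np.abs(v - threshold) + 1.0) ** 2

def sparse_surrogate_grad(v, threshold=1.0, scale=10.0, band=0.2):
    # Sparse variant: only evaluate the surrogate where |v - threshold| < band.
    # Returns the active indices and their gradient values; all other entries are treated as zero.
    active = np.flatnonzero(np.abs(v - threshold) < band)  # typically a small fraction of entries
    grads = 1.0 / (scale * np.abs(v[active] - threshold) + 1.0) ** 2
    return active, grads

# Usage: v holds membrane potentials over (neurons x time steps), flattened.
v = np.random.randn(1_000_000) * 0.3  # most values lie far from the threshold
active, grads = sparse_surrogate_grad(v)
print(f"active fraction: {active.size / v.size:.4f}")

In this toy setting the sparse variant touches only the small active fraction of entries, which is the kind of saving that a sparsity-aware backward pass can exploit on spatiotemporally sparse SNN activity.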