The emergence of brain-inspired neuromorphic computing as a paradigm for edge AI is motivating the search for high-performance and efficient spiking neural networks to run on this hardware. However, compared to classical neural networks in deep learning, current spiking neural networks lack competitive performance in compelling areas. Here, for sequential and streaming tasks, we demonstrate how a novel type of adaptive spiking recurrent neural network (SRNN) achieves state-of-the-art performance among spiking neural networks and approaches or exceeds the performance of classical recurrent neural networks (RNNs) while exhibiting sparse activity. From this sparsity, we derive a $>$100x energy improvement for our SRNNs over classical RNNs on the harder tasks. To achieve this, we model standard and adaptive multiple-timescale spiking neurons as self-recurrent neural units, and leverage surrogate gradients and auto-differentiation in the PyTorch Deep Learning framework to efficiently implement backpropagation-through-time, including learning of the important spiking neuron parameters to adapt our spiking neurons to the tasks.
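To make the "adaptive multiple-timescale spiking neuron as a self-recurrent unit" concrete, here is a minimal sketch of one discrete-time update step of an adaptive leaky integrate-and-fire (ALIF) neuron. The function name `alif_step`, the parameter values (`tau_m`, `tau_adp`, `b0`, `beta`), and the soft-reset choice are illustrative assumptions, not the paper's exact formulation; in the actual SRNNs these time constants are trained, and a surrogate gradient replaces the non-differentiable spike threshold during backpropagation-through-time.

```python
import math

def alif_step(v, b, x, tau_m=20.0, tau_adp=200.0, b0=1.0, beta=1.8, dt=1.0):
    """One Euler step of an adaptive leaky integrate-and-fire neuron.

    v: membrane potential (fast timescale, self-recurrent state)
    b: threshold-adaptation variable (slow timescale, self-recurrent state)
    x: input current at this time step
    Parameter values here are illustrative, not the paper's.
    """
    alpha = math.exp(-dt / tau_m)     # fast membrane decay factor
    rho = math.exp(-dt / tau_adp)     # slow adaptation decay factor
    theta = b0 + beta * b             # dynamic firing threshold
    v = alpha * v + (1 - alpha) * x   # leaky integration of input
    s = 1.0 if v >= theta else 0.0    # emit a spike on threshold crossing
    v = v - theta * s                 # soft reset after a spike
    b = rho * b + (1 - rho) * s       # each spike raises future thresholds
    return v, b, s

# Driving the neuron with a constant input: the rising threshold
# makes firing sparser over time (spike-frequency adaptation).
v, b = 0.0, 0.0
spikes = []
for _ in range(300):
    v, b, s = alif_step(v, b, 2.0)
    spikes.append(s)
```

Because the adaptation variable `b` decays roughly ten times more slowly than the membrane potential, the unit mixes two timescales; this multiple-timescale memory is what lets such neurons handle long sequential and streaming dependencies with sparse activity.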