Spiking Neural Networks (SNNs) are promising energy-efficient AI models when implemented on neuromorphic hardware. However, efficiently training SNNs is challenging due to their non-differentiability. Most existing methods either suffer from high latency (i.e., long simulation time steps) or cannot achieve performance as high as that of Artificial Neural Networks (ANNs). In this paper, we propose the Differentiation on Spike Representation (DSR) method, which achieves performance competitive with ANNs while maintaining low latency. First, we encode the spike trains into a spike representation using (weighted) firing rate coding. Based on the spike representation, we systematically derive that the spiking dynamics with common neural models can be represented as a sub-differentiable mapping. From this viewpoint, the proposed DSR method trains SNNs through the gradients of the mapping and avoids the common non-differentiability problem in SNN training. We then analyze the error incurred when representing this mapping with the forward computation of the SNN. To reduce this error, we propose to train the spike threshold in each layer and to introduce a new hyperparameter for the neural models. With these components, the DSR method achieves state-of-the-art SNN performance with low latency on both static and neuromorphic datasets, including CIFAR-10, CIFAR-100, ImageNet, and DVS-CIFAR10.
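To make the core idea concrete, below is a minimal PyTorch sketch (ours, not the paper's released code) of the DSR viewpoint for an integrate-and-fire (IF) neuron with soft reset and a constant rate-coded input: the forward pass runs the actual spiking simulation and returns the threshold-scaled firing rate, while the backward pass differentiates the sub-differentiable mapping r_out = clamp(r_in, 0, V_th) that the dynamics approximately realize. The class name `DSRSpikeFunction`, the fixed threshold, and the constant-input assumption are illustrative simplifications; the full method also trains the threshold per layer.

```python
import torch

class DSRSpikeFunction(torch.autograd.Function):
    """Sketch of Differentiation on Spike Representation (DSR) for one IF layer.

    Forward: simulate T time steps of spiking and return the spike
    representation (threshold-scaled firing rate) of the output.
    Backward: differentiate the mapping r_out = clamp(r_in, 0, v_th)
    instead of the non-differentiable spike function.
    """

    @staticmethod
    def forward(ctx, rate_input, v_th, T):
        # rate_input: rate-coded input current, applied at every time step
        v = torch.zeros_like(rate_input)            # membrane potential
        spike_count = torch.zeros_like(rate_input)
        for _ in range(T):
            v = v + rate_input                      # integrate input
            spikes = (v >= v_th).float()            # fire on threshold crossing
            v = v - spikes * v_th                   # soft reset by subtraction
            spike_count = spike_count + spikes
        ctx.save_for_backward(rate_input)
        ctx.v_th = v_th
        return v_th * spike_count / T               # spike representation

    @staticmethod
    def backward(ctx, grad_output):
        (rate_input,) = ctx.saved_tensors
        # Gradient of clamp(r, 0, v_th): 1 in the linear region, 0 outside.
        inside = (rate_input >= 0) & (rate_input <= ctx.v_th)
        return grad_output * inside.float(), None, None

# Usage: gradients flow through the rate-level mapping, not individual spikes.
x = torch.randn(4, 16, requires_grad=True)
r = DSRSpikeFunction.apply(x, 1.0, 20)              # V_th = 1.0, T = 20 steps
r.sum().backward()
```

The design choice this illustrates is that DSR backpropagates through the mapping between layer-wise spike representations rather than through each binary spike, so no surrogate gradient for the Heaviside spike function is needed; the gap between the simulated rate and the ideal clamp mapping is exactly the representation error the abstract mentions, which shrinks as T grows and which the trained thresholds and the extra hyperparameter are meant to reduce.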