Spiking neural networks (SNNs) are well suited to resource-constrained applications because they do not need expensive multipliers. In a typical rate-encoded SNN, each neuron fires a series of binary spikes within a globally fixed time window. The length of this window bounds the maximum number of spikes; it is also the latency of a single inference and largely determines the overall energy efficiency of the model. The aim of this paper is to shrink this time window while maintaining accuracy when converting ANNs to their equivalent SNNs. State-of-the-art conversion schemes yield SNNs with accuracies comparable to their source ANNs only for large window sizes. In this paper, we first analyze the information lost when converting pre-trained ANN models to standard rate-encoded SNN models. From these insights, we propose a suite of novel techniques that together mitigate this information loss and achieve state-of-the-art SNN accuracies at very low latency. Our method achieves a Top-1 SNN accuracy of 98.73% (1 time step) on the MNIST dataset, 76.38% (8 time steps) on the CIFAR-100 dataset, and 93.71% (8 time steps) on the CIFAR-10 dataset. On ImageNet, it achieves an SNN accuracy of 75.35%/79.16% with 100/200 time steps.
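To make the rate-encoding setup concrete, the following is a minimal sketch of standard Bernoulli (Poisson-like) rate encoding over a fixed window of T time steps. This illustrates the generic encoding scheme the abstract refers to, not the paper's conversion method; the function name and interface are illustrative assumptions.

```python
import numpy as np

def rate_encode(activations, T, rng=None):
    """Encode normalized ANN activations (values in [0, 1]) as binary spike
    trains over T time steps. At each step, a neuron fires with probability
    equal to its activation, so the mean firing rate over the window
    approximates the original activation value.
    Returns a 0/1 array of shape (T, *activations.shape)."""
    rng = np.random.default_rng(rng)
    a = np.clip(np.asarray(activations, dtype=float), 0.0, 1.0)
    spikes = (rng.random((T,) + a.shape) < a).astype(np.uint8)
    return spikes

# Larger T gives a finer-grained rate code, but every extra time step adds
# latency and energy cost -- the trade-off this paper targets.
acts = np.array([0.1, 0.5, 0.9])
spikes = rate_encode(acts, T=2000, rng=0)
print(spikes.mean(axis=0))  # approximately [0.1, 0.5, 0.9]
```

The approximation error of the empirical firing rate shrinks roughly as 1/sqrt(T), which is why naive rate-encoded SNNs need long windows to match ANN accuracy.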