Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive properties of low power consumption, biological plausibility, and adversarial robustness. The most effective way to train deep SNNs is through ANN-to-SNN conversion, which has yielded the best performance on deep network structures and large-scale datasets. However, there is a trade-off between accuracy and latency: to reach the accuracy of the original ANNs, a long simulation time is needed so that the firing rate of each spiking neuron matches the activation value of the corresponding analog neuron, which impedes the practical application of SNNs. In this paper, we aim to achieve high-performance converted SNNs with extremely low latency (fewer than 32 time-steps). We start by theoretically analyzing ANN-to-SNN conversion and show that scaling the thresholds plays a role similar to weight normalization. Instead of introducing constraints that facilitate ANN-to-SNN conversion at the cost of model capacity, we take a more direct approach and optimize the initial membrane potential to reduce the conversion loss in each layer. We further demonstrate that optimal initialization of membrane potentials can achieve expected error-free ANN-to-SNN conversion. We evaluate our algorithm on the CIFAR-10, CIFAR-100, and ImageNet datasets and achieve state-of-the-art accuracy using fewer time-steps; for example, we reach a top-1 accuracy of 93.38\% on CIFAR-10 with 16 time-steps. Moreover, our method can be applied to other ANN-SNN conversion methodologies and remarkably improves performance when the number of time-steps is small.
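To make the membrane-potential initialization idea concrete, below is a minimal NumPy sketch, not the paper's implementation, of a single integrate-and-fire neuron with reset-by-subtraction. The function name `if_rate` and all parameter values are hypothetical; in particular, the half-threshold initialization `v_init = 0.5` is an illustrative assumption showing how a nonzero initial potential turns the flooring of the spike count into rounding, which reduces the expected gap between the neuron's firing rate and the ANN activation it approximates.

```python
import numpy as np

def if_rate(z, theta=1.0, T=16, v_init=0.0):
    """Firing rate (scaled by the threshold theta) of an integrate-and-fire
    neuron driven by a constant per-step input z, using reset-by-subtraction."""
    v, spikes = v_init, 0
    for _ in range(T):
        v += z                    # integrate the input current
        if v >= theta:            # fire when the threshold is crossed
            spikes += 1
            v -= theta            # soft reset keeps the residual charge
    return spikes * theta / T     # rate approximates the ReLU activation z

rng = np.random.default_rng(0)
acts = rng.uniform(0.0, 1.0, 10_000)   # surrogate ANN activations in [0, 1]
for v0 in (0.0, 0.5):                  # zero vs. half-threshold initialization
    err = np.mean([abs(if_rate(a, v_init=v0) - a) for a in acts])
    print(f"v_init = {v0}: mean |rate - activation| = {err:.4f}")
```

Under these assumptions, `v_init = 0` floors `T * z` into the spike count while `v_init = theta / 2` rounds it to the nearest integer, so the mean conversion error for uniformly distributed activations drops by roughly half at the same number of time-steps.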