Spiking neural networks (SNNs), which operate with asynchronous discrete events, offer higher energy efficiency than conventional artificial neural networks (ANNs). A popular approach to implementing deep SNNs is ANN-SNN conversion, which combines the efficient training of ANNs with the efficient inference of SNNs. However, due to the intrinsic differences between ANNs and SNNs, the accuracy loss is usually non-negligible, especially with few simulation time steps, which greatly restricts the application of SNNs to latency-sensitive edge devices. In this paper, we identify that such performance degradation stems from the misrepresentation of the negative or overflow residual membrane potential in SNNs. Inspired by this, we systematically analyze the conversion error between SNNs and ANNs and decompose it into three parts: quantization error, clipping error, and residual membrane potential representation error. With these insights, we propose a dual-phase conversion algorithm to minimize these errors separately, and we show that each phase achieves significant performance gains in a complementary manner. We evaluate our method on challenging datasets including CIFAR-10, CIFAR-100, and ImageNet. The experimental results show that the proposed method achieves state-of-the-art accuracy and latency with promising energy savings compared to ANNs. For instance, our method achieves an accuracy of 73.20% on CIFAR-100 in only 2 time steps with 15.7$\times$ less energy consumption.
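For intuition, consider a hedged sketch based on the standard integrate-and-fire formulation used in prior ANN-SNN conversion analyses; the symbols below (weighted input $z^{l}$, firing threshold $\theta^{l}$, initial membrane potential $v^{l}(0)$, and simulation length $T$) are illustrative assumptions rather than notation defined in this abstract. The average spiking output of layer $l$ over $T$ steps behaves like a clipped, quantized ReLU:
\[
\phi^{l}(T) = \mathrm{clip}\!\left(\frac{\theta^{l}}{T}\left\lfloor \frac{z^{l}\,T + v^{l}(0)}{\theta^{l}} \right\rfloor,\; 0,\; \theta^{l}\right),
\qquad a^{l} = \max\!\left(z^{l}, 0\right) \;\text{(ANN activation)}.
\]
Under this assumed formulation, the gap $a^{l} - \phi^{l}(T)$ exposes the three error sources named above: clipping error when $z^{l} > \theta^{l}$ (the output saturates at $\theta^{l}$), quantization error from the floor with resolution $\theta^{l}/T$, and residual membrane potential representation error when the leftover potential $v^{l}(T)$ is negative or overflows the threshold and is thus never emitted as spikes.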