Spiking Neural Networks (SNNs), as bio-inspired energy-efficient neural networks, have attracted great attention from researchers and industry. The most efficient way to train deep SNNs is through ANN-SNN conversion. However, the conversion usually suffers from accuracy loss and long inference time, which impede the practical application of SNNs. In this paper, we theoretically analyze ANN-SNN conversion and derive sufficient conditions for optimal conversion. To better correlate the ANN and the SNN and achieve higher accuracy, we propose the Rate Norm Layer to replace the ReLU activation function in source ANN training, enabling direct conversion from a trained ANN to an SNN. Moreover, we propose an optimal fit curve to quantify the fit between the activation values of the source ANN and the actual firing rates of the target SNN. We show that the inference time can be reduced by optimizing the upper bound of the fit curve in the revised ANN to achieve fast inference. Our theory can explain existing work on fast inference and achieves better results. The experimental results show that the proposed method achieves near-lossless conversion with VGG-16, PreActResNet-18, and deeper structures. Moreover, it can reach 8.6x faster inference at 0.265x the energy consumption of the typical method. The code is available at https://github.com/DingJianhao/OptSNNConvertion-RNL-RIL.
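To illustrate the idea behind replacing ReLU for conversion-friendly training, below is a minimal sketch of a clipped, rate-normalized activation. The function name `rate_norm`, the trainable bound `p`, and the exact normalization are assumptions for illustration only; the Rate Norm Layer defined in the paper may differ in form.

```python
import numpy as np

def rate_norm(x, p):
    """Hypothetical sketch of a Rate-Norm-style activation:
    a ReLU clipped at a trainable upper bound p and normalized
    to [0, 1], matching the range of an SNN firing rate
    (a neuron firing at most once per timestep has rate <= 1)."""
    return np.clip(x, 0.0, p) / p

# Large pre-activations saturate at 1, just as a spiking neuron's
# firing rate saturates; negative inputs map to a rate of 0.
acts = rate_norm(np.array([-1.0, 0.5, 2.0, 5.0]), p=2.0)
print(acts)  # [0.   0.25 1.   1.  ]
```

Bounding the activation this way keeps the source ANN's outputs in the same range as the target SNN's firing rates, which is the correlation the abstract refers to.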