Spiking neural networks (SNNs) are brain-inspired models with strong spatio-temporal information processing capability and high computational energy efficiency. However, as SNNs grow deeper, the memory overhead of storing their weights has drawn increasing attention. Inspired by quantization techniques for artificial neural networks (ANNs), binarized SNNs (BSNNs) have been introduced to address this memory problem. Due to the lack of suitable learning algorithms, BSNNs are usually obtained by ANN-to-SNN conversion, so their accuracy is limited by that of the trained ANNs. In this paper, we propose an ultra-low-latency adaptive local binary spiking neural network (ALBSNN) with accuracy loss estimators, which dynamically selects the network layers to be binarized by evaluating the error introduced by the binarized weights during training, thereby preserving network accuracy. Experimental results show that this method reduces storage space by more than 20% without losing network accuracy. In addition, to accelerate training, a global average pooling (GAP) layer, built from a combination of convolution and pooling, replaces the fully connected layers, allowing SNNs to reach good recognition accuracy with only a small number of time steps. In the extreme case of a single time step, we still achieve 92.92%, 91.63%, and 63.54% testing accuracy on three datasets: FashionMNIST, CIFAR-10, and CIFAR-100, respectively.
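The abstract does not specify the exact binarization scheme or the form of the accuracy loss estimator. The sketch below illustrates the general idea only, assuming XNOR-Net-style sign binarization with a per-layer scaling factor and using a relative L2 weight error as a hypothetical stand-in for the estimator; `binarization_error`, `select_layers_to_binarize`, and the threshold are illustrative names, not the paper's method.

```python
import torch

def binarize_weights(w: torch.Tensor) -> torch.Tensor:
    # Sign binarization with a per-layer scaling factor (XNOR-Net style);
    # the paper's actual binarization scheme may differ.
    alpha = w.abs().mean()
    return alpha * torch.sign(w)

def binarization_error(w: torch.Tensor) -> float:
    # A simple proxy for the accuracy loss caused by binarizing one layer:
    # relative L2 distance between full-precision and binarized weights.
    wb = binarize_weights(w)
    return (torch.norm(w - wb) / torch.norm(w)).item()

def select_layers_to_binarize(layers, threshold=0.5):
    # Hypothetical selection rule: keep a layer binarized only while its
    # estimated error stays below the threshold; layers is a list of
    # (name, weight_tensor) pairs.
    return [name for name, w in layers if binarization_error(w) < threshold]
```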
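To make the GAP-based head concrete, here is a minimal PyTorch sketch of replacing fully connected layers with a 1x1 convolution followed by global average pooling, as described above; the channel count (256) and class count (10) are illustrative assumptions, not the paper's architecture.

```python
import torch.nn as nn

# Classifier head without fully connected layers: a 1x1 convolution maps
# the feature channels to one channel per class, and global average
# pooling reduces each class channel to a single logit.
head = nn.Sequential(
    nn.Conv2d(256, 10, kernel_size=1),  # 256 feature channels -> 10 classes
    nn.AdaptiveAvgPool2d(1),            # global average pooling
    nn.Flatten(),                       # (N, 10, 1, 1) -> (N, 10)
)
```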