We propose a new learning algorithm to train spiking neural networks (SNN) using conventional artificial neural networks (ANN) as a proxy. We couple an SNN and an ANN with the same network architecture and shared synaptic weights; the SNN is made of integrate-and-fire (IF) neurons and the ANN of ReLU neurons. The forward passes of the two networks are completely independent. Assuming that a rate-coded IF neuron approximates a ReLU, we backpropagate the error of the SNN through the proxy ANN to update the shared weights, simply by replacing the final output of the ANN with that of the SNN. We applied the proposed proxy learning to deep convolutional SNNs and evaluated it on two benchmark datasets, Fashion-MNIST and CIFAR-10, reaching 94.56% and 93.11% classification accuracy, respectively. The proposed networks outperform other deep SNNs trained with tandem learning or surrogate gradient learning, as well as those converted from deep ANNs. Converted SNNs require long simulation times to reach reasonable accuracies, whereas our proxy learning yields efficient SNNs with much shorter simulation times. The source code of the proposed method is publicly available at https://github.com/SRKH/ProxyLearning.
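A minimal PyTorch-style sketch of the idea described above, assuming a rate-coded IF layer and the output-substitution trick for the backward pass; the names (ProxyPair, forward_snn, proxy_loss) and all implementation details are illustrative assumptions, not the interface of the released code at the repository linked above.

```python
# Sketch of proxy learning: an ANN (ReLU) and an SNN (IF) share the same
# weights; the SNN error is backpropagated through the ANN graph by
# substituting the SNN output for the ANN output value.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyPair(nn.Module):
    """One layer with shared synaptic weights used by both forward passes."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)  # shared weights

    def forward_ann(self, x):
        # ANN path: standard ReLU activation (differentiable proxy).
        return F.relu(self.fc(x))

    def forward_snn(self, x, T=20):
        # SNN path: rate-coded IF neurons simulated for T time steps.
        # Run without gradient tracking: the SNN forward pass is independent.
        with torch.no_grad():
            v = torch.zeros(x.shape[0], self.fc.out_features)
            spikes = torch.zeros_like(v)
            for _ in range(T):
                v = v + self.fc(x)          # integrate input current
                s = (v >= 1.0).float()      # fire when the threshold is reached
                v = v - s                   # soft reset by subtraction
                spikes = spikes + s
            # Firing rate approximates ReLU (saturating at 1).
            return spikes / T

def proxy_loss(ann_out, snn_out, target):
    # Replace the ANN output value with the SNN output while keeping the
    # ANN computational graph, so gradients of the SNN error flow through
    # the proxy ANN to the shared weights.
    proxy_out = ann_out + (snn_out - ann_out).detach()
    return F.cross_entropy(proxy_out, target)

# Usage: gradients reach layer.fc only through the ANN path,
# but the loss value is computed from the SNN output.
layer = ProxyPair(784, 10)
x = torch.rand(32, 784)
y = torch.randint(0, 10, (32,))
loss = proxy_loss(layer.forward_ann(x), layer.forward_snn(x), y)
loss.backward()
```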