We propose a new learning algorithm to train spiking neural networks (SNN) using conventional artificial neural networks (ANN) as a proxy. We couple an SNN and an ANN, made of integrate-and-fire (IF) and ReLU neurons respectively, with the same network architecture and shared synaptic weights. The forward passes of the two networks are entirely independent. By assuming a rate-coded IF neuron to be an approximation of ReLU, we backpropagate the error of the SNN through the proxy ANN to update the shared weights, simply by replacing the ANN's final output with that of the SNN. We applied the proposed proxy learning to deep convolutional SNNs and evaluated it on two benchmark datasets, Fashion-MNIST and CIFAR-10, reaching 94.56% and 93.11% classification accuracy, respectively. The proposed networks outperform other deep SNNs trained with tandem learning, surrogate gradient learning, or converted from deep ANNs. Converted SNNs require long simulation times to reach reasonable accuracies, while our proxy learning leads to efficient SNNs with much shorter simulation times.
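A minimal PyTorch sketch of the proxy-learning idea described above. The layer sizes, the time window T, the soft-reset IF dynamics, and the straight-through trick used to splice the SNN output into the ANN's backward graph are assumptions made for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyPair(nn.Module):
    """An ANN (ReLU) and an SNN (rate-coded IF) with identical architecture and shared weights."""
    def __init__(self, in_features=784, hidden=256, n_classes=10, T=32):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden)   # shared weights
        self.fc2 = nn.Linear(hidden, n_classes)     # shared weights
        self.T = T                                  # number of simulation time steps

    def ann_forward(self, x):
        # Conventional ReLU proxy network; this builds the backprop graph.
        h = F.relu(self.fc1(x))
        return self.fc2(h)

    @torch.no_grad()
    def snn_forward(self, x):
        # IF neurons with rate coding: integrate input current each step,
        # spike when the membrane potential crosses threshold, soft reset.
        v1 = torch.zeros(x.size(0), self.fc1.out_features, device=x.device)
        out = torch.zeros(x.size(0), self.fc2.out_features, device=x.device)
        for _ in range(self.T):
            v1 = v1 + self.fc1(x)        # constant input current at every step
            s1 = (v1 >= 1.0).float()     # spikes of the hidden IF layer
            v1 = v1 - s1                 # soft reset by subtraction
            out = out + self.fc2(s1)     # accumulate output-layer potential
        return out / self.T              # rate-coded output

    def forward(self, x):
        ann_out = self.ann_forward(x)
        snn_out = self.snn_forward(x)
        # Replace the ANN's final output with the SNN's output in the forward
        # value, while keeping the ANN graph for the backward pass, so the
        # SNN error is backpropagated through the proxy ANN.
        return ann_out + (snn_out - ann_out).detach()

# Usage: the loss is computed on the SNN output, and the gradients update
# the weights shared by both networks.
model = ProxyPair()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(8, 784)                  # dummy batch of flattened images
y = torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```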