Spiking neural networks (SNNs) are a viable alternative to conventional artificial neural networks when energy efficiency and computational complexity are of importance. A major advantage of SNNs is their binary information transfer through spike trains. Training SNNs has, however, been a challenge, since the neuron models are non-differentiable and traditional gradient-based backpropagation algorithms cannot be applied directly. Furthermore, spike-timing-dependent plasticity (STDP), albeit a spike-based learning rule, updates weights locally and does not optimize for the output error of the network. We present desire backpropagation, a method to derive the desired spike activity of neurons from the output error. The loss function can then be evaluated locally for every neuron. Incorporating the desire values into the STDP weight update leads to global error minimization and increased classification accuracy. At the same time, the neuron dynamics and computational efficiency of STDP are maintained, making it a spike-based supervised learning rule. We trained three-layer networks to classify MNIST and Fashion-MNIST images and reached accuracies of 98.41% and 87.56%, respectively. Furthermore, we show that desire backpropagation is computationally less complex than backpropagation in traditional neural networks.
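To illustrate the core idea described above, the following minimal sketch (not the authors' code) shows how a per-neuron desire value, derived here from the output error for the last layer only, could modulate an STDP-like local weight update. All names (output_desire, desire_weight_update, pre_trace, lr) and the rate-coded spike-count formulation are illustrative assumptions, not details given in the abstract.

```python
import numpy as np

def output_desire(spike_counts, target_counts):
    """Desired activity change per output neuron: +1 if it should spike more,
    -1 if it should spike less, 0 if its activity already matches the target.
    (Assumed sign-based rule for illustration.)"""
    return np.sign(target_counts - spike_counts)

def desire_weight_update(weights, pre_trace, post_desire, lr=1e-3):
    """STDP-like local update: potentiate synapses whose presynaptic activity
    (pre_trace) coincides with a positive desire of the postsynaptic neuron,
    and depress them when that desire is negative."""
    return weights + lr * np.outer(post_desire, pre_trace)

# Toy usage: 4 output neurons driven by 6 presynaptic neurons.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 6))
pre = rng.integers(0, 5, size=6).astype(float)   # presynaptic spike counts
out = rng.integers(0, 5, size=4).astype(float)   # output spike counts
target = np.array([4.0, 0.0, 0.0, 0.0])          # one-hot-like target activity
w = desire_weight_update(w, pre, output_desire(out, target))
```

In this sketch the desire acts as a third factor gating the direction of the local update, which is how a global output error could steer otherwise purely local STDP-style plasticity; how desire values are propagated to hidden layers is not specified here and would follow the method described in the paper.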