Spiking neural networks (SNN) have recently emerged as alternatives to traditional neural networks, owing to their energy efficiency benefits and their capacity to better capture biological neuronal mechanisms. However, the classic backpropagation algorithm for training traditional networks has been notoriously difficult to apply to SNN due to the hard thresholding and discontinuities at spike times. Therefore, the large majority of prior work has assumed that exact gradients of SNN w.r.t. their weights do not exist and has focused on approximation methods that produce surrogate gradients. In this paper, (1) by applying the implicit function theorem to SNN at the discrete spike times, we prove that, albeit non-differentiable in time, SNNs have well-defined gradients w.r.t. their weights, and (2) we propose a novel training algorithm, called \emph{forward propagation} (FP), that computes exact gradients for SNN. FP exploits the causality structure between the spikes and allows us to parallelize computation forward in time. It can be used with other algorithms that simulate the forward pass, and it also provides insights into why related algorithms such as Hebbian learning and recently proposed surrogate gradient methods may perform well.
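A minimal sketch of the argument in (1), under the standard assumption of a differentiable membrane potential $V(t; w)$ with a fixed firing threshold $\vartheta$ (this notation is introduced here purely for illustration and is not fixed by the abstract): a spike time $t_k$ is implicitly defined by the threshold-crossing condition $V(t_k; w) = \vartheta$, so whenever $\partial V / \partial t \neq 0$ at $t = t_k$ (the potential crosses the threshold transversally), the implicit function theorem yields a well-defined derivative of the spike time w.r.t. the weights,
\[
  \frac{\partial t_k}{\partial w}
  \;=\; -\left(\left.\frac{\partial V}{\partial t}\right|_{t=t_k}\right)^{-1}
        \left.\frac{\partial V}{\partial w}\right|_{t=t_k},
\]
so gradients of spike-time-dependent losses w.r.t. $w$ exist even though the spike train itself is discontinuous in time.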