Spiking neural networks combine analog computation with event-based communication using discrete spikes. While the impressive advances of deep learning are enabled by training non-spiking artificial neural networks using the backpropagation algorithm, applying this algorithm to spiking networks was previously hindered by the existence of discrete spike events and discontinuities. For the first time, this work derives the backpropagation algorithm for a continuous-time spiking neural network and a general loss function by applying the adjoint method together with the proper partial derivative jumps, allowing for backpropagation through discrete spike events without approximations. This algorithm, EventProp, backpropagates errors at spike times to compute the exact gradient in an event-based, temporally and spatially sparse fashion. We use gradients computed via EventProp to train networks on the Yin-Yang and MNIST datasets using either a spike-time- or voltage-based loss function and report competitive performance. Our work supports the rigorous study of gradient-based learning algorithms in spiking neural networks and provides insights toward their implementation in novel brain-inspired hardware.
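To make the reference to "partial derivative jumps" concrete, the following sketch shows the standard implicit-function-theorem step that renders a spike time differentiable; the notation (membrane potential $V$, threshold $\vartheta$, parameter $w$, spike time $t_k$) is illustrative and not quoted from the paper:

\begin{align*}
  V\bigl(t_k(w);\, w\bigr) &= \vartheta
    && \text{threshold crossing defines the spike time } t_k \\
  \dot V(t_k)\,\frac{\mathrm{d}t_k}{\mathrm{d}w}
    + \frac{\partial V}{\partial w}\Big|_{t_k} &= 0
    && \text{total derivative of the crossing condition} \\
  \frac{\mathrm{d}t_k}{\mathrm{d}w}
    &= -\,\frac{1}{\dot V(t_k)}\,\frac{\partial V}{\partial w}\Big|_{t_k}
    && \text{well-defined whenever } \dot V(t_k) \neq 0
\end{align*}

In an adjoint-method backward pass, terms of this form enter only at the recorded spike times, consistent with the claim that errors are backpropagated at spike times in a temporally and spatially sparse fashion; the full adjoint equations are derived in the paper itself.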