Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware. However, the supervised training of SNNs remains a hard problem due to the discontinuity of the spiking neuron model. Most existing methods imitate the backpropagation framework and feedforward architectures of artificial neural networks, and deal with the problem by using surrogate derivatives or computing gradients with respect to the spiking times. These approaches either accumulate approximation errors or propagate information only through existing spikes, and usually require propagating information along time steps, which incurs large memory costs and is biologically implausible. In this work, we consider feedback spiking neural networks, which are more brain-like, and propose a novel training method that does not rely on the exact reverse of the forward computation. First, we show that the average firing rates of SNNs with feedback connections gradually evolve to an equilibrium state over time, which satisfies a fixed-point equation. Then, by viewing the forward computation of feedback SNNs as a black-box solver for this equation and leveraging implicit differentiation on the equation, we can compute gradients for the parameters without considering the exact forward procedure. In this way, the forward and backward procedures are decoupled and the problem of non-differentiable spiking functions is therefore avoided. We also briefly discuss the biological plausibility of implicit differentiation, which only requires computing another equilibrium. Extensive experiments on MNIST, Fashion-MNIST, N-MNIST, CIFAR-10, and CIFAR-100 demonstrate the superior performance of our method for feedback models with fewer neurons and parameters in a small number of time steps. Our code is available at https://github.com/pkuxmq/IDE-FSNN.
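To make the implicit-differentiation step concrete, below is a minimal PyTorch sketch of the general idea rather than the paper's implementation (see the linked repository for that). A toy sigmoid map `f` stands in for the firing-rate dynamics, `solve_forward` plays the role of the black-box equilibrium solver, and `backward_implicit` obtains parameter gradients by solving a second fixed-point problem for the vector-Jacobian term. All names and the specific map are illustrative assumptions.

```python
import torch

def f(a, x, W, b):
    # Hypothetical smooth fixed-point map standing in for the firing-rate
    # dynamics; the paper's actual map is derived from the spiking neuron model.
    return torch.sigmoid(a @ W.T + x + b)

def solve_forward(x, W, b, n_iters=100):
    # Black-box forward solver: iterate to an approximate equilibrium
    # a* = f(a*, x; W, b). No autograd graph is built through the loop.
    a = torch.zeros_like(x)
    with torch.no_grad():
        for _ in range(n_iters):
            a = f(a, x, W, b)
    return a

def backward_implicit(a_star, x, W, b, loss_fn, n_iters=100):
    # Implicit differentiation at the equilibrium: for a loss L(a*),
    #   dL/dtheta = u * (df/dtheta), where u solves u = dL/da* + u * (df/da*).
    # The linear system for u is itself solved as another fixed-point
    # iteration, so the forward solver is never unrolled.
    a_star = a_star.detach().requires_grad_(True)
    f_val = f(a_star, x, W, b)          # one differentiable application of f
    g = torch.autograd.grad(loss_fn(a_star), a_star)[0]
    u = torch.zeros_like(g)
    for _ in range(n_iters):
        # One vector-Jacobian product u * (df/da*) per iteration.
        Ju = torch.autograd.grad(f_val, a_star, grad_outputs=u,
                                 retain_graph=True)[0]
        u = g + Ju
    return torch.autograd.grad(f_val, (W, b), grad_outputs=u)

# Toy usage: a small, contractive map so both fixed-point loops converge.
torch.manual_seed(0)
x = torch.randn(4, 16)
W = (0.1 * torch.randn(16, 16)).requires_grad_(True)
b = torch.zeros(16, requires_grad=True)
a_star = solve_forward(x, W, b)
grad_W, grad_b = backward_implicit(a_star, x, W, b, lambda a: (a ** 2).mean())
```

The design point mirrored here is the one the abstract emphasizes: the backward pass never replays the forward iterations, so the two procedures are decoupled and the memory cost is constant in the number of forward time steps.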