We recently proposed the STiDi-BP algorithm, which avoids backward recursive gradient computation, for training multi-layer spiking neural networks (SNNs) with single-spike-based temporal coding. The algorithm employs a linear approximation to compute the derivative of the spike latency with respect to the membrane potential, and it uses spiking neurons with a piecewise linear postsynaptic potential to reduce the computational cost and the complexity of neural processing. In this paper, we extend the STiDi-BP algorithm to deeper and convolutional architectures. Evaluation on the image classification task over two popular benchmarks, the MNIST and Fashion-MNIST datasets, yields accuracies of 99.2% and 92.8%, respectively, confirming that the algorithm is applicable to deep SNNs. We also address the reduction of memory storage and computational cost. To do so, we consider a convolutional SNN (CSNN) with two sets of weights: real-valued weights that are updated in the backward pass, and their signs, binary weights, that are employed in the feedforward process. We evaluate the binary CSNN on the MNIST and Fashion-MNIST datasets and obtain acceptable performance with a negligible accuracy drop with respect to the real-valued weights (about $0.6\%$ and $0.8\%$ drops, respectively).
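As an illustration of the neuron model mentioned above, the following is a minimal sketch of a piecewise linear postsynaptic potential. The triangular shape and the time constants (`tau_rise`, `tau_fall`) are assumptions made for illustration only; the exact kernel and parameters used in the paper are not reproduced here.

```python
import numpy as np

def piecewise_linear_psp(t, tau_rise=5.0, tau_fall=15.0):
    """Assumed triangular PSP kernel: linear rise to 1 over tau_rise (ms),
    then linear decay back to 0 over tau_fall (ms). Zero elsewhere.
    Shape and time constants are illustrative, not the paper's values."""
    t = np.asarray(t, dtype=float)
    rise = (t / tau_rise) * (t >= 0) * (t < tau_rise)
    fall = (1.0 - (t - tau_rise) / tau_fall) * (t >= tau_rise) * (t < tau_rise + tau_fall)
    return rise + fall

# The membrane potential of a neuron receiving single spikes at times t_j
# with weights w_j would then be a weighted sum of such kernels:
# u(t) = sum_j w_j * piecewise_linear_psp(t - t_j)
```

Because the kernel is piecewise linear, the membrane potential between input spikes is itself piecewise linear, which is what makes a linear approximation of the spike-latency derivative inexpensive to evaluate.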
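The two-weight scheme for the binary CSNN can be sketched as follows. All names here (`w_real`, `binary_forward_weights`, `sgd_step`) and the learning-rate value are illustrative assumptions; the point is only that training updates the real-valued weights, while the feedforward process sees only their signs.

```python
import numpy as np

# Minimal sketch of the two-weight scheme described above: real-valued
# weights are kept and updated in the backward pass, while only their
# signs (binary weights) are used in the feedforward pass.

rng = np.random.default_rng(0)
w_real = rng.normal(scale=0.1, size=(784, 400))  # real-valued weights (updated)

def binary_forward_weights(w_real):
    """Binary weights employed in the feedforward process: sign(w_real)."""
    return np.sign(w_real)

def sgd_step(w_real, grad, lr=1e-3):
    """Gradients (computed using the binary forward pass) update the
    real-valued weights, not the binary ones."""
    return w_real - lr * grad
```

Keeping the real-valued weights as the training target is what keeps the accuracy drop small: the binarization only quantizes the forward computation, while the accumulated gradient information survives in `w_real`.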