Spiking neural networks (SNNs) have attracted extensive attention from researchers in many fields due to their brain-like information processing mechanism. The introduction of surrogate gradients has enabled SNNs to scale to more complex tasks and gradually narrow the gap with conventional artificial neural networks. Current SNNs aggregate the outputs at all time steps to produce the final prediction, which compromises their temporal characteristics and reduces both performance and efficiency. We propose a temporal knowledge sharing approach (TKS) that enables information to interact across time steps: the outputs of selected time steps are composed into teacher signals that guide the training of the network alongside the true labels. We validate TKS on the static datasets CIFAR10, CIFAR100, and ImageNet-1k, and on the neuromorphic datasets DVS-CIFAR10 and NCALTECH101. Our experimental results show that TKS achieves state-of-the-art performance compared with other algorithms. Experiments on the fine-grained classification datasets CUB-200-2011, StanfordDogs, and StanfordCars further demonstrate our algorithm's superiority. TKS gives the model stronger temporal generalization: the network can be trained with large time steps while maintaining its performance when tested with small time steps. This greatly facilitates the deployment of SNNs on edge devices.
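To make the idea concrete, below is a minimal, hypothetical sketch of a TKS-style loss in plain Python. It assumes the teacher signal is composed by averaging the softmax outputs of the k time steps most confident on the true label, and that this teacher supervises the mean prediction over all time steps together with the cross-entropy on the real label; the paper's exact selection rule, distillation term, and weighting may differ.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(probs, label):
    """Cross-entropy of a probability vector against a hard label."""
    return -math.log(probs[label] + 1e-12)

def kl_div(p, q):
    """KL(p || q) between two probability vectors."""
    return sum(pi * math.log((pi + 1e-12) / (qi + 1e-12))
               for pi, qi in zip(p, q))

def tks_loss(logits_per_step, label, k=2, alpha=0.5):
    """Hypothetical TKS-style loss (illustrative, not the paper's exact form).

    logits_per_step: per-time-step logit vectors produced by the SNN head.
    Teacher = mean softmax over the k time steps most confident on `label`
    (assumed selection rule); student = mean softmax over all time steps.
    Total loss mixes cross-entropy on the true label with a distillation
    term pulling the student toward the teacher.
    """
    T = len(logits_per_step)
    probs = [softmax(l) for l in logits_per_step]
    C = len(probs[0])
    # Select the k time steps whose output is most confident on the true label.
    ranked = sorted(range(T), key=lambda t: probs[t][label], reverse=True)
    chosen = ranked[:k]
    teacher = [sum(probs[t][c] for t in chosen) / k for c in range(C)]
    # The mean output over all time steps forms the final (student) prediction.
    student = [sum(probs[t][c] for t in range(T)) / T for c in range(C)]
    return alpha * cross_entropy(student, label) + (1 - alpha) * kl_div(teacher, student)
```

Because the teacher is built from the network's own per-time-step outputs, this is a form of self-distillation across time: no external teacher model is needed, and the extra cost at training time is only the selection and one additional loss term.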