Despite the rapid progress of neuromorphic computing, the limited depth and the resulting insufficient representation power of spiking neural networks (SNNs) severely restrict their application scope in practice. Residual learning and shortcut connections have been shown to be an important technique for training deep neural networks, yet previous work has rarely assessed their applicability to the characteristics of spike-based communication and spatiotemporal dynamics. This oversight leads to impeded information flow and an accompanying degradation problem. In this paper, we identify the crux of this problem and propose a novel residual block for SNNs, which is able to significantly extend the depth of directly trained SNNs, e.g., up to 482 layers on CIFAR-10 and 104 layers on ImageNet, without observing any degradation problem. We validate the effectiveness of our method on both frame-based and neuromorphic datasets, and our SRM-ResNet104 achieves 76.02% accuracy on ImageNet, the first time such a result has been reported for directly trained SNNs. We also estimate the energy efficiency: the resulting networks require, on average, only one spike per neuron to classify an input sample. We believe our powerful and scalable models will provide strong support for further exploration of SNNs.
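To make the idea of a depth-friendly spiking residual block concrete, the following is a minimal PyTorch sketch. The abstract does not specify the actual block design, so the `LIFNeuron` dynamics, the pre-activation ordering, and the placement of the identity shortcut around the conv-BN-neuron path are all assumptions used purely for illustration, not the paper's method.

```python
# Hypothetical sketch of a spiking residual block (assumed design, not the
# paper's exact architecture).
import torch
import torch.nn as nn


class LIFNeuron(nn.Module):
    """Leaky integrate-and-fire neuron with a hard threshold (assumed dynamics).
    Note: a real trainable SNN would replace the hard threshold's gradient
    with a surrogate gradient; that detail is omitted here."""

    def __init__(self, tau: float = 2.0, v_threshold: float = 1.0):
        super().__init__()
        self.tau = tau
        self.v_threshold = v_threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [T, B, C, H, W] input current over T time steps.
        v = torch.zeros_like(x[0])
        spikes = []
        for t in range(x.shape[0]):
            v = v + (x[t] - v) / self.tau             # leaky integration
            spike = (v >= self.v_threshold).float()   # fire on threshold crossing
            v = v * (1.0 - spike)                     # hard reset after a spike
            spikes.append(spike)
        return torch.stack(spikes)


class SpikingResidualBlock(nn.Module):
    """Residual block whose identity shortcut bypasses the neuron-conv-BN path,
    so information can flow unimpeded across many stacked blocks."""

    def __init__(self, channels: int):
        super().__init__()
        self.neuron1 = LIFNeuron()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.neuron2 = LIFNeuron()
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [T, B, C, H, W]; conv/BN are applied per time step by folding T into the batch.
        T, B = x.shape[:2]
        out = self.neuron1(x)
        out = self.bn1(self.conv1(out.flatten(0, 1))).unflatten(0, (T, B))
        out = self.neuron2(out)
        out = self.bn2(self.conv2(out.flatten(0, 1))).unflatten(0, (T, B))
        return out + x  # identity shortcut preserved across the block
```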