Spiking neural networks (SNNs) have advantages in latency and energy efficiency over traditional artificial neural networks (ANNs) due to their event-driven computation mechanism and their replacement of energy-consuming weight multiplications with additions. However, to match the accuracy of its ANN counterpart, an SNN usually requires long spike trains: traditionally, a spike train needs around one thousand time steps to approach the accuracy of the corresponding ANN. This offsets the computational efficiency brought by SNNs, because longer spike trains mean more operations and longer latency. In this paper, we propose a radix-encoded SNN with ultra-short spike trains. In the new model, a spike train takes fewer than ten time steps. Experiments show that our method achieves a 25X speedup and a 1.1% accuracy improvement over the state-of-the-art work on the VGG-16 network architecture and the CIFAR-10 dataset.
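To illustrate why a positional (radix) code can shorten spike trains so drastically compared with conventional rate coding, here is a minimal sketch. It assumes radix encoding assigns each time step a decreasing power-of-the-radix weight, so precision grows exponentially with the number of steps, whereas rate coding's precision grows only linearly. The functions `rate_encode` and `radix_encode` are illustrative names, not the paper's implementation.

```python
import numpy as np

def rate_encode(value, num_steps):
    """Rate coding sketch: a value in [0, 1] is approximated by the fraction
    of time steps that carry a spike, so fine precision needs long trains."""
    rng = np.random.default_rng(0)
    spikes = (rng.random(num_steps) < value).astype(np.int8)
    return spikes, spikes.mean()  # decoded value = firing rate

def radix_encode(value, num_steps, radix=2):
    """Radix (positional) coding sketch: each time step carries one
    base-`radix` digit weighted by a decreasing power of the radix."""
    spikes = np.zeros(num_steps, dtype=np.int8)
    residual = value
    for t in range(num_steps):
        weight = radix ** -(t + 1)
        digit = min(int(residual // weight), radix - 1)  # digit in {0, ..., radix-1}
        spikes[t] = digit
        residual -= digit * weight
    decoded = sum(d * radix ** -(t + 1) for t, d in enumerate(spikes))
    return spikes, decoded

if __name__ == "__main__":
    v = 0.7231
    _, rate_hat = rate_encode(v, num_steps=8)
    _, radix_hat = radix_encode(v, num_steps=8)
    print(f"target={v:.4f}  rate(8 steps)={rate_hat:.4f}  radix(8 steps)={radix_hat:.4f}")
```

With only eight time steps, the positional code recovers the value to roughly 8-bit precision, while the rate code can only resolve multiples of 1/8; this is the general intuition behind replacing thousand-step spike trains with trains of fewer than ten steps.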