In this paper, we present an energy-efficient spiking neural network (SNN) architecture that can seamlessly run deep SNNs with improved accuracy. First, we propose conversion-aware training (CAT) to reduce ANN-to-SNN conversion loss without hardware implementation overhead. In CAT, an activation function that simulates SNN behavior during ANN training is exploited to reduce the data representation error after conversion. Building on CAT, we also present a time-to-first-spike (TTFS) coding scheme that enables lightweight logarithmic computation by exploiting spike time information. An SNN processor supporting the proposed techniques has been implemented in a 28nm CMOS process. Running VGG-16 with 5-bit logarithmic weights, the processor achieves top-1 accuracies of 91.7%, 67.9%, and 57.4% on CIFAR-10, CIFAR-100, and Tiny-ImageNet, with inference energies of 486.7 µJ, 503.6 µJ, and 1426 µJ, respectively.
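To make the conversion-aware idea concrete, below is a minimal PyTorch-style sketch of a CAT-like activation: during ANN training, activations are clipped and snapped to the finite set of levels the converted SNN can represent, so the ANN is optimized against the post-conversion representation error. The uniform T-level quantizer, the threshold `theta`, and the straight-through gradient are illustrative assumptions, not the paper's exact formulation.

```python
import torch

class CATActivation(torch.autograd.Function):
    """Illustrative conversion-aware activation: clip and quantize
    activations to the T discrete levels an SNN can represent after
    conversion, so the ANN is trained against the conversion error."""

    @staticmethod
    def forward(ctx, x, theta=1.0, T=32):
        # Clip to [0, theta] and snap to T uniform levels
        # (hypothetical stand-in for the paper's activation function).
        y = torch.clamp(x, 0.0, theta)
        y = torch.round(y * T / theta) * (theta / T)
        ctx.save_for_backward(x)
        ctx.theta = theta
        return y

    @staticmethod
    def backward(ctx, grad_out):
        # Straight-through estimator: pass gradients only inside the
        # clipping range; quantization is treated as identity.
        (x,) = ctx.saved_tensors
        mask = (x >= 0) & (x <= ctx.theta)
        return grad_out * mask.to(grad_out.dtype), None, None

# Usage: replace ReLU in the ANN with
# y = CATActivation.apply(x)
```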
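Likewise, the following toy sketch shows how TTFS coding combined with logarithmic weights can reduce multiply-accumulate to shift-and-add. It assumes activations are encoded as powers of two by their spike time (an earlier spike means a larger value, i.e., value 2^{-t}) and weights are stored as exponents w representing 2^{-w}, so each product 2^{-t} · 2^{-w} = 2^{-(t+w)} needs only a shift. The function names and the 2^{-t} encoding are assumptions for illustration; the paper's datapath may differ.

```python
import numpy as np

def ttfs_encode(a, T=8):
    """Map a normalized activation a in (0, 1] to a spike time:
    larger values fire earlier (time-to-first-spike coding)."""
    a = np.clip(a, 1.0 / 2**T, 1.0)
    return int(np.ceil(-np.log2(a)))  # time step of the first spike

def shift_accumulate(spike_times, log_weights):
    """Accumulate contributions with shifts only: a logarithmic weight
    2^{-w} applied to a TTFS-encoded value 2^{-t} is the single shift
    2^{-(t+w)} (hypothetical model of a shift-based datapath)."""
    acc = 0.0
    for t, w in zip(spike_times, log_weights):
        acc += 2.0 ** (-(t + w))  # realized in hardware as a barrel shift
    return acc

# Usage: two inputs 0.5 and 0.25 with log weights 1 and 2 (i.e., 0.5, 0.25)
times = [ttfs_encode(0.5), ttfs_encode(0.25)]
print(shift_accumulate(times, [1, 2]))  # 0.5*0.5 + 0.25*0.25 = 0.3125
```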