Spiking Neural Networks (SNNs) provide significantly lower power dissipation than deep neural networks (DNNs), referred to as analog neural networks (ANNs) in this work. Conventionally, SNNs have failed to reach the training accuracies of ANNs. However, several recent studies have shown that this challenge can be addressed by converting a trained ANN to an SNN instead of training the SNN directly. Nonetheless, the large latency of SNNs still limits their application, and the problem becomes more severe for large-scale datasets such as ImageNet. Overcoming this problem is challenging because SNNs exhibit a trade-off between accuracy and latency. In this work, we elegantly alleviate the problem by using a trainable clipping layer, called TCL. By combining TCL with traditional data-normalization techniques, we obtain accuracies of 71.12% and 73.38% on ImageNet for VGG-16 and ResNet-34, respectively, after the ANN-to-SNN conversion under a latency constraint of 250 cycles.
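To make the idea concrete, the sketch below shows one way a trainable clipping layer could be implemented in PyTorch, based only on the abstract's description. The class name TCL, the initial bound of 4.0, and the formulation as a ReLU with a learnable upper bound (akin to PACT-style clipping) are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class TCL(nn.Module):
    """Trainable clipping layer: a ReLU whose upper bound is a learned
    parameter (an illustrative sketch; the paper's exact formulation
    may differ)."""

    def __init__(self, init_alpha: float = 4.0):
        super().__init__()
        # Learnable clipping bound, trained jointly with the network
        # weights (e.g. with weight decay pulling it toward a tight fit).
        self.alpha = nn.Parameter(torch.tensor(init_alpha))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clip activations to [0, alpha]; gradients reach alpha through
        # the elements where x exceeds the bound.
        return torch.minimum(torch.relu(x), self.alpha)
```

Under this reading, at conversion time the learned alpha of each layer would take the role that the layer-wise maximum activation plays in traditional data normalization, i.e. it would set the spiking neuron's firing threshold, which is how a tighter, trained bound can reduce latency at a given accuracy.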