Spiking neural networks (SNNs) have recently gained momentum due to their low-power, multiplication-free computing and their closer resemblance to biological processes in the human nervous system. However, large SNN models require very long spike trains (up to 1000 time steps) to reach an accuracy comparable to their artificial neural network (ANN) counterparts, which offsets the efficiency gains and inhibits their application to low-power systems in real-world use cases. To alleviate this problem, emerging neural encoding schemes have been proposed to shorten the spike train while maintaining high accuracy. However, current SNN accelerators cannot support these emerging encoding schemes well. In this work, we present a novel hardware architecture that can efficiently support SNNs with emerging neural encoding. Our implementation features energy- and area-efficient processing units with increased parallelism and reduced memory accesses. We verified the accelerator on an FPGA, achieving 25% and 90% improvements over previous work in power consumption and latency, respectively. At the same time, its high area efficiency allows us to scale to large neural network models. To the best of our knowledge, this is the first work to deploy the large neural network model VGG on physical FPGA-based neuromorphic hardware.
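To make the two core claims concrete, the following is a minimal Python sketch (not the paper's accelerator design) of a leaky integrate-and-fire layer. It illustrates why SNN inference can be multiplication-free, since binary spikes merely gate weight accumulation, and why latency grows linearly with the spike-train length T, which is what emerging encoding schemes shorten. All names and parameters here (`lif_layer`, `v_th`, `leak`) are illustrative assumptions, not the paper's interface.

```python
import numpy as np

def lif_layer(spikes_in, weights, v_th=1.0, leak=0.5):
    """One leaky integrate-and-fire (LIF) layer, simulated over T time steps.

    spikes_in: (T, n_in) binary array of input spikes
    weights:   (n_in, n_out) synaptic weight matrix
    """
    T = spikes_in.shape[0]
    n_out = weights.shape[1]
    v = np.zeros(n_out)                       # membrane potentials
    spikes_out = np.zeros((T, n_out))
    for t in range(T):                        # latency scales linearly with T
        active = spikes_in[t].astype(bool)    # binary spikes select weight rows
        # Accumulate the weights of active inputs: additions only, no
        # weight-activation multiplications (the MACs of an ANN layer).
        v = leak * v + weights[active].sum(axis=0)
        # The leak multiply is per-neuron and, with leak = 1/2**k, is a bit
        # shift in hardware, so the datapath can remain multiplier-free.
        fired = v >= v_th
        spikes_out[t] = fired
        v[fired] -= v_th                      # soft reset after spiking
    return spikes_out

# Example: with rate coding, a value is carried by the spike count over T
# steps, so accuracy improves with longer trains (hence up to 1000 steps).
rng = np.random.default_rng(0)
out = lif_layer(rng.random((16, 8)) < 0.3, rng.normal(size=(8, 4)))
```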