As the technology industry moves toward implementing tasks such as natural language processing, path planning, and image classification on smaller edge computing devices, the demand for more efficient algorithm implementations and hardware accelerators has become a significant area of research. In recent years, several edge deep learning hardware accelerators have been released that specifically focus on reducing the power and area consumed by deep neural networks (DNNs). On the other hand, spiking neural networks (SNNs), which operate on discrete time-series data, have been shown to achieve substantial power reductions over even these edge DNN accelerators when deployed on specialized neuromorphic event-based/asynchronous hardware. While neuromorphic hardware has demonstrated great potential for accelerating deep learning tasks at the edge, the current space of algorithms and hardware is limited and still in early development. Thus, many hybrid approaches have been proposed that aim to convert pre-trained DNNs into SNNs. In this work, we provide a general guide to converting pre-trained DNNs into SNNs, and we present techniques to improve the deployment of converted SNNs on neuromorphic hardware with respect to latency, power, and energy. Our experimental results show that, compared against the Intel Neural Compute Stick 2, Intel's neuromorphic processor Loihi consumes up to 27x less power and 5x less energy on the tested image classification tasks when our SNN improvement techniques are applied.
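The DNN-to-SNN conversion idea referenced above can be illustrated with a minimal rate-coding sketch: a ReLU layer's activations are approximated by the firing rates of integrate-and-fire neurons driven by the same weighted input. This is an illustrative assumption about the conversion scheme (function names are hypothetical, and activations are assumed normalized to [0, 1]); it is not the paper's actual implementation.

```python
import numpy as np

def relu_layer(x, w):
    """Standard ANN layer: ReLU(x . w)."""
    return np.maximum(0.0, x @ w)

def snn_layer_rate(x, w, t_steps=1000, v_th=1.0):
    """Integrate-and-fire layer with soft reset; returns per-neuron firing rates.

    With a constant input current equal to the ANN pre-activation and
    activations normalized below v_th, the spike rate converges to ReLU(x . w).
    """
    i_in = x @ w                          # constant input current per neuron
    v = np.zeros_like(i_in)               # membrane potentials
    spike_count = np.zeros_like(i_in)
    for _ in range(t_steps):
        v += i_in                         # integrate input
        fired = v >= v_th                 # threshold crossing -> spike
        spike_count += fired
        v = np.where(fired, v - v_th, v)  # soft reset keeps residual charge
    return spike_count * v_th / t_steps

# Hypothetical toy layer: the SNN rate approaches the ReLU output as t_steps grows.
x = np.array([0.5, -0.3])
w = np.array([[0.8, -0.4], [0.2, 0.6]])
ann_out = relu_layer(x, w)
snn_out = snn_layer_rate(x, w)
```

The soft reset (subtracting the threshold rather than zeroing the potential) is what makes the rate approximation accurate, since no integrated charge is discarded; negative pre-activations never reach threshold, reproducing ReLU's clipping at zero.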