Sparse and event-driven spiking neural network (SNN) algorithms are ideal candidates for energy-efficient edge computing. Yet, with the growing complexity of SNN algorithms, it is difficult to properly benchmark and optimize their computational cost without hardware in the loop. Although digital neuromorphic processors have been widely adopted to benchmark SNN algorithms, their black-box nature is problematic for algorithm-hardware co-optimization. In this work, we open the black box of the digital neuromorphic processor for algorithm designers by presenting the neuron processing instruction set and detailed energy consumption of the SENeCA neuromorphic architecture. For convenient benchmarking and optimization, we provide the energy cost of the essential neuromorphic components in SENeCA, including neuron models and learning rules. Moreover, we exploit SENeCA's hierarchical memory and demonstrate its advantage over existing neuromorphic processors. We show the energy efficiency of SNN algorithms for video processing and online learning, and demonstrate the potential of our work for optimizing algorithm designs. Overall, we present a practical approach that enables algorithm designers to accurately benchmark SNN algorithms and paves the way towards effective algorithm-hardware co-design.
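To illustrate how per-component energy figures of this kind can be used by an algorithm designer without hardware in the loop, the sketch below shows a simple event-driven cost model for one SNN layer. It is only a minimal illustration under stated assumptions: the constants (`E_SYNOP_PJ`, `E_NEURON_UPDATE_PJ`, `E_SPIKE_OUT_PJ`) and the function `layer_energy_pj` are hypothetical placeholders, not SENeCA's published numbers or tooling; the measured costs reported in the paper would be substituted for real estimates.

```python
# Minimal sketch (not the paper's tooling): estimating the energy of one SNN
# layer per time step from event counts and per-operation energy figures.
# All constants below are hypothetical placeholders in picojoules -- replace
# them with the per-component energies reported for the target processor.

E_SYNOP_PJ = 1.0          # one synaptic update triggered by an input spike (placeholder)
E_NEURON_UPDATE_PJ = 2.0  # one neuron-state update, e.g. LIF integration (placeholder)
E_SPIKE_OUT_PJ = 0.5      # generating and routing one output spike (placeholder)

def layer_energy_pj(in_spikes: int, fan_out: int, neurons: int, out_spikes: int) -> float:
    """Event-driven cost model: only arriving spikes, updated neurons,
    and emitted spikes consume energy."""
    synaptic = in_spikes * fan_out * E_SYNOP_PJ
    updates = neurons * E_NEURON_UPDATE_PJ
    output = out_spikes * E_SPIKE_OUT_PJ
    return synaptic + updates + output

# Example: a sparse layer where only ~5% of 1024 inputs spike in a time step.
print(layer_energy_pj(in_spikes=51, fan_out=256, neurons=256, out_spikes=12))
```

A model of this shape makes the benefit of sparsity explicit: the dominant synaptic term scales with the number of input spikes, so reducing activity directly reduces the estimated energy.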