Spiking Neural Networks (SNNs) are efficient computational models for performing spatio-temporal pattern recognition on resource- and power-constrained platforms. Executing SNNs on neuromorphic hardware can further reduce the energy consumption of these platforms. With increasing model size and complexity, mapping SNN-based applications to tile-based neuromorphic hardware is becoming increasingly challenging. This is attributed to a limitation of the neuro-synaptic core, viz. the crossbar, which can accommodate only a fixed number of pre-synaptic connections per post-synaptic neuron. For complex SNN-based models with many neurons and many pre-synaptic connections per neuron, (1) connections may need to be pruned after training to fit the crossbar resources, leading to a loss in model quality, e.g., accuracy, and (2) the neurons and synapses need to be partitioned and placed on the neuro-synaptic cores of the hardware, which can lead to increased latency and energy consumption. In this work, we propose (1) a novel unrolling technique that decomposes a neuron function with many pre-synaptic connections into a sequence of homogeneous neural units, significantly improving crossbar utilization while retaining all pre-synaptic connections, and (2) SpiNeMap, a novel methodology to map SNNs to neuromorphic hardware with the aim of minimizing energy consumption and spike latency.
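To make the fan-in constraint concrete, the following is a minimal Python sketch of the unrolling idea described above: a neuron whose pre-synaptic connections exceed the crossbar fan-in is decomposed into a chain of units, each respecting the fan-in limit and forwarding its partial result to the next unit in the sequence. The function name `unroll_neuron` and the parameter `fanin_limit` are illustrative assumptions, not the paper's actual implementation.

```python
def unroll_neuron(presynaptic, fanin_limit):
    """Decompose one neuron with len(presynaptic) inputs into a chain of
    homogeneous units, each with at most `fanin_limit` inputs (the crossbar
    fan-in constraint). Every unit after the first reserves one input slot
    for the partial result forwarded by its predecessor, so no pre-synaptic
    connection is pruned. Assumes fanin_limit >= 2.
    """
    units = []
    inputs = list(presynaptic)
    # The first unit consumes up to fanin_limit original inputs.
    units.append(inputs[:fanin_limit])
    rest = inputs[fanin_limit:]
    # Each subsequent unit takes the previous unit's output ("prev_unit")
    # plus at most fanin_limit - 1 of the remaining original inputs.
    step = fanin_limit - 1
    for i in range(0, len(rest), step):
        units.append(["prev_unit"] + rest[i:i + step])
    return units
```

For example, `unroll_neuron(list(range(10)), 4)` yields three units, `[0, 1, 2, 3]`, `['prev_unit', 4, 5, 6]`, and `['prev_unit', 7, 8, 9]`, each within the fan-in limit of 4, so all ten pre-synaptic connections are retained without pruning.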