Larger Spiking Neural Network (SNN) models are typically preferred because they can offer higher accuracy. However, deploying such models on resource- and energy-constrained embedded platforms is inefficient. Toward this, we present tinySNN, a framework that optimizes the memory and energy requirements of SNN processing in both the training and inference phases while keeping accuracy high. This is achieved by reducing the number of SNN operations, improving the learning quality, quantizing the SNN parameters, and selecting an appropriate SNN model. Furthermore, our tinySNN quantizes different SNN parameters (i.e., weights and neuron parameters) to maximize compression, while exploring different combinations of quantization schemes, precision levels, and rounding schemes to find the model that provides acceptable accuracy. The experimental results demonstrate that our tinySNN significantly reduces the memory footprint and energy consumption of SNNs without accuracy loss compared to the baseline network. Therefore, our tinySNN effectively compresses a given SNN model to achieve high accuracy in a memory- and energy-efficient manner, thereby enabling the deployment of SNNs in resource- and energy-constrained embedded applications.
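To make the quantization exploration concrete, the following is a minimal Python sketch of the kind of search described above; it is not the authors' implementation. It sweeps combinations of precision levels and rounding schemes, quantizing all parameters and keeping the smallest model whose accuracy stays above a threshold. The bit widths, rounding options, and the `evaluate_accuracy` hook are illustrative assumptions.

```python
import itertools
import numpy as np

def quantize(values, n_bits, rounding="nearest"):
    """Quantize a float array to signed n_bits fixed-point values.

    The rounding options here are illustrative placeholders for the
    rounding schemes explored by the framework.
    """
    # Scale so the largest magnitude fits the chosen bit width.
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(values)) / qmax
    scaled = values / scale
    if rounding == "nearest":
        q = np.round(scaled)
    elif rounding == "truncate":
        q = np.trunc(scaled)
    else:  # stochastic rounding: round up with probability equal to the fraction
        floor = np.floor(scaled)
        q = floor + (np.random.rand(*scaled.shape) < (scaled - floor))
    return np.clip(q, -qmax - 1, qmax) * scale

def search_quantized_model(model_params, evaluate_accuracy, acc_threshold):
    """Sweep (precision, rounding) combinations over all SNN parameters
    (weights and neuron parameters alike) and return the configuration
    with the smallest memory footprint that meets the accuracy threshold."""
    best = None
    for n_bits, rounding in itertools.product(
            [16, 12, 8, 6, 4], ["nearest", "truncate", "stochastic"]):
        quantized = {name: quantize(p, n_bits, rounding=rounding)
                     for name, p in model_params.items()}
        acc = evaluate_accuracy(quantized)  # user-supplied evaluation hook
        mem_bits = n_bits * sum(p.size for p in model_params.values())
        if acc >= acc_threshold and (best is None or mem_bits < best[0]):
            best = (mem_bits, n_bits, rounding, quantized)
    return best
```

In this sketch, lower precision directly shrinks the memory footprint (`n_bits` per parameter), so the search naturally trades compression against the accuracy constraint, mirroring the accuracy-memory trade-off the framework navigates.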