A prominent technique for reducing the memory footprint of Spiking Neural Networks (SNNs) without significantly decreasing accuracy is quantization. However, state-of-the-art works focus only on applying weight quantization under a single quantization scheme, i.e., either post-training quantization (PTQ) or in-training quantization (ITQ), and do not consider (1) quantizing other SNN parameters (e.g., the neuron membrane potential), (2) exploring different combinations of quantization approaches (i.e., quantization schemes, precision levels, and rounding schemes), and (3) selecting, in the end, the SNN model with a good memory-accuracy trade-off. Therefore, the memory savings these works can offer while meeting a target accuracy are limited, thereby hindering the deployment of SNNs on resource-constrained systems (e.g., IoT-Edge devices). Towards this, we propose Q-SpiNN, a novel quantization framework for memory-efficient SNNs. The key mechanisms of Q-SpiNN are: (1) employing quantization for different SNN parameters based on their significance to accuracy, (2) exploring different combinations of quantization schemes, precision levels, and rounding schemes to find efficient SNN model candidates, and (3) developing an algorithm that quantifies the benefit of the memory-accuracy trade-off obtained by the candidates and selects the Pareto-optimal one. The experimental results show that, for the unsupervised network, Q-SpiNN reduces the memory footprint by ca. 4x while maintaining accuracy within 1% of the baseline on the MNIST dataset. For the supervised network, Q-SpiNN reduces memory by ca. 2x while keeping accuracy within 2% of the baseline on the DVS-Gesture dataset.
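The core operation underlying such a framework is uniform fixed-point quantization of a parameter (weight or membrane potential) to a chosen precision level, under a chosen rounding scheme. The sketch below is illustrative only: the function name, the Qm.n fractional-bit split, and the example values are assumptions for exposition, not the paper's actual implementation.

```python
import math

def quantize(x, n_frac, rounding="nearest"):
    """Quantize a real value to a fixed-point grid with step 2**-n_frac.

    n_frac   -- number of fractional bits (the precision level).
    rounding -- "nearest" (round-to-nearest, ties up) or
                "truncate" (round toward zero); two common rounding schemes.
    """
    scale = 2 ** n_frac
    if rounding == "nearest":
        q = math.floor(x * scale + 0.5)  # round-to-nearest
    elif rounding == "truncate":
        q = math.trunc(x * scale)        # discard fractional part
    else:
        raise ValueError(f"unknown rounding scheme: {rounding}")
    return q / scale

# Hypothetical weight values quantized with 4 fractional bits (step 1/16).
weights = [0.113, -0.207, 0.049]
print([quantize(w, 4) for w in weights])               # round-to-nearest
print([quantize(w, 4, "truncate") for w in weights])   # truncation
```

Storing a parameter in, say, an 8-bit fixed-point format instead of a 32-bit float gives a 4x memory reduction, which is the order of saving reported in the abstract; the framework's job is then to pick, per parameter, the precision and rounding scheme that keep accuracy within the tolerated loss.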