Spiking neural networks (SNNs) are biology-inspired artificial neural networks (ANNs) that comprise spiking neurons to process asynchronous discrete signals. While more efficient in power consumption and inference speed on neuromorphic hardware, SNNs are usually difficult to train directly from scratch with spikes because of their discreteness. As an alternative, many efforts have been devoted to converting conventional ANNs into SNNs by copying the weights from ANNs and adjusting the spiking threshold potential of neurons in SNNs. Researchers have designed new SNN architectures and conversion algorithms to diminish the conversion error. However, an effective conversion should address the difference between the SNN and ANN architectures with an efficient approximation of the loss function, which is missing in the field. In this work, we analyze the conversion error by recursively reducing it to a layer-wise summation and propose a novel strategic pipeline that transfers the weights to the target SNN by combining threshold balancing and soft-reset mechanisms. This pipeline enables almost no accuracy loss between the converted SNNs and conventional ANNs with only $\sim1/10$ of the typical SNN simulation time. Our method is promising for deployment on embedded platforms with limited energy and memory that offer better support for SNNs.