As neural networks grow in scale, techniques that allow them to run at low computational cost and high energy efficiency are increasingly needed. To meet these demands, various efficient neural network paradigms, such as spiking neural networks (SNNs) and binary neural networks (BNNs), have been proposed. However, they suffer from serious drawbacks, such as degraded inference accuracy and high latency. To address these problems, we propose the single-step neural network (S$^2$NN), an energy-efficient neural network with low computational cost and high precision. The proposed S$^2$NN transmits information between hidden layers with spikes, as SNNs do. Nevertheless, it has no temporal dimension, so, like BNNs, it incurs no latency in either the training or inference phase. The proposed S$^2$NN therefore has a lower computational cost than SNNs, which require time-series processing. However, S$^2$NN cannot adopt the na\"{i}ve backpropagation algorithm because of the non-differentiable nature of spikes. We derive a suitable neuron model by reducing the surrogate gradient for multi-time-step SNNs to a single time step. We experimentally demonstrate that the obtained neuron model enables S$^2$NN to be trained more accurately and energy-efficiently than existing neuron models for SNNs and BNNs. We also show that the proposed S$^2$NN achieves accuracy comparable to that of full-precision networks while being highly energy-efficient.
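The core mechanism described above, a non-differentiable spike function in the forward pass paired with a smooth surrogate derivative in the backward pass, can be illustrated with a minimal sketch. This is not the paper's actual neuron model; the function names, the sigmoid-shaped surrogate, and the `threshold` and `alpha` parameters are illustrative assumptions chosen to show the general single-step idea (no membrane state carried across time steps).

```python
import math

def spike_forward(v, threshold=1.0):
    # Forward pass: Heaviside step. A neuron emits a binary spike (1.0)
    # when its membrane potential v reaches the firing threshold.
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0, alpha=4.0):
    # Backward pass: the true derivative of the Heaviside step is zero
    # almost everywhere, so backpropagation replaces it with a smooth
    # surrogate -- here, the derivative of a scaled sigmoid (an
    # illustrative choice, not the specific surrogate from the paper).
    s = 1.0 / (1.0 + math.exp(-alpha * (v - threshold)))
    return alpha * s * (1.0 - s)

# Single-step evaluation: each input is processed once, with no
# temporal dimension, so there is no latency from repeated time steps.
potentials = [0.2, 1.5, 0.9, 2.3]
spikes = [spike_forward(v) for v in potentials]   # binary activations
grads = [surrogate_grad(v) for v in potentials]   # gradients for training
```

Because the spike is computed in a single step, both training and inference avoid the per-time-step loop that multi-step SNNs require, which is the source of the computational savings claimed above.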