Spiking Neural Networks (SNNs) have emerged as an attractive alternative to traditional deep learning frameworks because they offer higher computational efficiency on event-driven neuromorphic hardware. However, state-of-the-art (SOTA) SNNs suffer from high inference latency resulting from inefficient input encoding and training techniques. The most widely used input coding schemes, such as Poisson-based rate coding, do not leverage the temporal learning capabilities of SNNs. This paper presents a training framework for low-latency, energy-efficient SNNs that uses a hybrid encoding scheme at the input layer: the analog pixel values of an image are applied directly during the first timestep, and a novel variant of spike temporal coding is used during subsequent timesteps. In particular, neurons in every hidden layer are restricted to fire at most once per image, which increases activation sparsity. To train these hybrid-encoded SNNs, we propose a variant of the gradient-descent-based spike timing dependent backpropagation (STDB) mechanism with a novel cross-entropy loss function based on both the spike times and the membrane potentials of the output neurons. The resulting SNNs have reduced latency and high activation sparsity, yielding significant improvements in computational efficiency. We evaluate the proposed training scheme on image classification tasks from the CIFAR-10 and CIFAR-100 datasets on several VGG architectures. On CIFAR-100, we achieve a top-1 accuracy of $66.46$\% with $5$ timesteps while consuming ${\sim}125\times$ less compute energy than an equivalent standard ANN. Additionally, the proposed SNN performs inference $5$-$300\times$ faster than other state-of-the-art rate- or temporally-coded SNN models.
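As a rough illustration of the hybrid input encoding described above, the sketch below (PyTorch, not the authors' released code) applies the analog pixel intensities directly at the first timestep and then emits at most one spike per input pixel at a later timestep. The specific intensity-to-spike-time mapping and the helper name hybrid_encode are illustrative assumptions, not details taken from the paper.

\begin{verbatim}
import torch

def hybrid_encode(image: torch.Tensor, num_steps: int) -> torch.Tensor:
    """Encode a batch of images (values in [0, 1]) into num_steps frames.

    Returns a tensor of shape (num_steps, *image.shape).
    """
    frames = torch.zeros(num_steps, *image.shape, dtype=image.dtype)
    # Timestep 0: pass the analog pixel values through unchanged.
    frames[0] = image
    if num_steps > 1:
        # Subsequent timesteps: brighter pixels spike earlier (hypothetical
        # linear intensity-to-time mapping); each pixel fires at most once.
        spike_step = 1 + ((1.0 - image) * (num_steps - 2)).round().long()
        spike_step = spike_step.clamp(1, num_steps - 1)
        frames.scatter_(0, spike_step.unsqueeze(0), 1.0)
    return frames

# Usage: encode a dummy CIFAR-sized batch over 5 timesteps.
batch = torch.rand(8, 3, 32, 32)
spikes = hybrid_encode(batch, num_steps=5)
print(spikes.shape)  # torch.Size([5, 8, 3, 32, 32])
\end{verbatim}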