Because binary spike signals allow the traditional high-power multiply-accumulate (MAC) operations to be replaced with low-power accumulate (AC) operations, brain-inspired Spiking Neural Networks (SNNs) are attracting increasing attention. However, the binary spike propagation of Full-Spike Neural Networks (FSNNs) with limited time steps is prone to significant information loss. To improve performance, several state-of-the-art SNN models trained from scratch inevitably introduce many non-spike operations. These non-spike operations incur additional computational cost and may not be deployable on neuromorphic hardware that supports only spike operations. To train a large-scale FSNN with high performance, this paper proposes a novel Dual-Stream Training (DST) method, which adds a detachable Auxiliary Accumulation Pathway (AAP) to a full-spike residual network. The accumulation in the AAP compensates for the information loss during the forward and backward passes of full-spike propagation and facilitates training of the FSNN. In the test phase, the AAP can be removed, leaving only the FSNN; this keeps energy consumption low and makes the model easy to deploy. Moreover, in cases where non-spike operations are available, the AAP can also be retained at inference to improve feature discrimination at the cost of a small amount of non-spike computation. Extensive experiments on the ImageNet, DVS Gesture, and CIFAR10-DVS datasets demonstrate the effectiveness of DST.
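To make the dual-stream idea concrete, below is a minimal PyTorch-style sketch of one residual block with a detachable AAP. The block name, the surrogate gradient, and the exact wiring are illustrative assumptions inferred from the abstract, not the authors' reference implementation: the spiking stream propagates binary spikes through a shared convolution, while the auxiliary stream accumulates the same real-valued pre-activations so that its gradients also update the shared weights during training.

```python
import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (assumed)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Surrogate: pass the gradient only near the firing threshold.
        return grad_out * (x.abs() < 0.5).float()


spike = SpikeFn.apply


class DualStreamBlock(nn.Module):
    """One residual block with a detachable Auxiliary Accumulation Pathway.

    Hypothetical sketch: the spiking stream `s` carries binary spikes;
    the auxiliary stream `a` accumulates real-valued activations produced
    by the *same* convolution, so both streams train the shared weights.
    """

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, s, a=None):
        y = self.bn(self.conv(s))          # shared weights, spike-driven input
        s_out = spike(y + s - 1.0)         # full-spike residual output (binary)
        if a is not None:                  # training: accumulate alongside
            a_out = a + y                  # low-power accumulation, no extra MACs
            return s_out, a_out
        return s_out, None                 # inference: AAP detached, spikes only
```

In a training loop, both streams would feed their own classification heads and losses; at deployment, passing `a=None` drops the auxiliary pathway so only the full-spike stream remains.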