Spiking Neural Networks (SNNs) have attracted considerable attention due to their biologically inspired, event-driven nature, making them highly suitable for neuromorphic hardware. Time-to-First-Spike (TTFS) coding, where each neuron fires at most once during inference, offers the benefits of reduced spike counts, enhanced energy efficiency, and faster processing. However, SNNs employing TTFS coding often suffer from diminished classification accuracy. This paper presents an efficient training framework for TTFS that not only improves accuracy but also accelerates the training process. Unlike most previous approaches, we first identify two key issues limiting the performance of TTFS neurons: information diminishing and an imbalanced membrane potential distribution. To address these challenges, we propose a novel initialization strategy. Additionally, we introduce a temporal weighting decoding method that aggregates temporal outputs through a weighted sum, supporting backpropagation through time (BPTT). Moreover, we re-evaluate the pooling layer for TTFS neurons and find that average pooling is better suited than max pooling for this coding scheme. Our experimental results show that the proposed training framework leads to more stable training and significant performance improvements, achieving state-of-the-art (SOTA) results on both the MNIST and Fashion-MNIST datasets.
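The abstract does not give implementation details for the temporal weighting decoding, so the sketch below is a hedged illustration of one plausible form: per-timestep logits are combined by a weighted sum over the time dimension, which keeps the computation differentiable end to end so BPTT can be applied. The weight schedule (here, fixed and linearly increasing toward later timesteps) and all names are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn


class TemporalWeightingDecoder(nn.Module):
    """Hypothetical sketch of temporal weighting decoding for a TTFS SNN.

    Assumption: a fixed, linearly increasing weight schedule normalized to
    sum to 1; the paper may instead learn these weights.
    """

    def __init__(self, num_steps: int):
        super().__init__()
        # Later timesteps receive larger weights (assumed schedule).
        w = torch.linspace(1.0, float(num_steps), num_steps)
        self.register_buffer("weights", w / w.sum())

    def forward(self, outputs_over_time: torch.Tensor) -> torch.Tensor:
        # outputs_over_time: (T, batch, num_classes) stacked per-step logits.
        # The weighted sum is differentiable, so gradients flow back through
        # every timestep, enabling BPTT.
        return torch.einsum("t,tbc->bc", self.weights, outputs_over_time)


# Usage example with placeholder shapes: 8 timesteps, batch of 32, 10 classes.
decoder = TemporalWeightingDecoder(num_steps=8)
logits = decoder(torch.randn(8, 32, 10))  # -> (32, 10)
```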