Spiking neural networks (SNNs) underlie low-power, fault-tolerant information processing in the brain and could constitute a power-efficient alternative to conventional deep neural networks when implemented on suitable neuromorphic hardware accelerators. However, instantiating SNNs that solve complex computational tasks in silico remains a significant challenge. Surrogate gradient (SG) techniques have emerged as a standard solution for training SNNs end-to-end. Still, their success depends on synaptic weight initialization, as in conventional artificial neural networks (ANNs). Yet, unlike for ANNs, it remains unclear what constitutes a good initial state for an SNN. Here, we develop a general initialization strategy for SNNs inspired by the fluctuation-driven regime commonly observed in the brain. Specifically, we derive practical solutions for data-dependent weight initialization that ensure fluctuation-driven firing in the widely used leaky integrate-and-fire (LIF) neurons. We empirically show that SNNs initialized following our strategy exhibit superior learning performance when trained with SGs. These findings generalize across several datasets and SNN architectures, including fully connected, deep convolutional, recurrent, and more biologically plausible SNNs obeying Dale's law. Thus, fluctuation-driven initialization provides a practical, versatile, and easy-to-implement strategy for improving SNN training performance on diverse tasks in neuromorphic engineering and computational neuroscience.
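To make the idea of fluctuation-driven, data-dependent weight initialization concrete, the sketch below shows one simplified instance of it: for a current-based LIF neuron receiving Poisson input, Campbell's theorem for shot noise gives the membrane-potential variance as sigma_U^2 = n * nu * sigma_w^2 * \int eps(t)^2 dt for zero-mean i.i.d. weights, so the weight scale can be solved for a target fluctuation magnitude. The function name, the parameter xi (how many fluctuation standard deviations the threshold sits above rest), and the kernel normalization are illustrative assumptions, not the paper's API; the difference-of-exponentials PSP kernel is a standard choice for LIF neurons with exponential synapses.

```python
import numpy as np

def fluctuation_driven_init(n_inputs, nu, theta=1.0, u_rest=0.0, xi=3.0,
                            tau_mem=20e-3, tau_syn=5e-3, dt=1e-4):
    """Zero-mean weight initialization for the fluctuation-driven regime (a sketch).

    Targets membrane fluctuations sigma_U = (theta - u_rest) / xi under Poisson
    input of rate `nu` (Hz) from `n_inputs` presynaptic neurons, using
    Campbell's theorem:  sigma_U^2 = n_inputs * nu * sigma_w^2 * int eps(t)^2 dt.
    All names and defaults here are hypothetical, for illustration only.
    """
    # PSP kernel of a current-based LIF neuron: difference of exponentials
    t = np.arange(0.0, 10.0 * tau_mem, dt)
    eps = (np.exp(-t / tau_mem) - np.exp(-t / tau_syn)) * tau_syn / (tau_mem - tau_syn)
    eps_sq_int = np.sum(eps**2) * dt          # numerical \int eps(t)^2 dt

    # Keep the threshold xi fluctuation-std-devs above the resting potential
    sigma_u_target = (theta - u_rest) / xi
    sigma_w = sigma_u_target / np.sqrt(n_inputs * nu * eps_sq_int)

    # i.i.d. zero-mean Gaussian weights at the derived scale
    return np.random.randn(n_inputs) * sigma_w

# Example: a neuron with 200 Poisson inputs firing at 10 Hz
w = fluctuation_driven_init(n_inputs=200, nu=10.0)
```

Choosing zero-mean weights keeps the mean membrane potential at rest, so the fluctuation magnitude alone controls how often the neuron crosses threshold; the data dependence enters through the input statistics (here, the presynaptic rate `nu`), which in practice would be estimated from the training data.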