Spiking neural networks are a promising approach towards next-generation models of the brain in computational neuroscience. Moreover, compared to classic artificial neural networks, they could enable energy-efficient AI deployment through fast computation on specialized neuromorphic hardware. However, training deep spiking neural networks, especially in an unsupervised manner, is challenging, and the performance of a spiking model is significantly hindered by dead or bursting neurons. Here, we apply end-to-end learning with membrane-potential-based backpropagation to a spiking convolutional auto-encoder with multiple trainable layers of leaky integrate-and-fire neurons. We propose bio-inspired regularization methods to control the spike density in latent representations. In our experiments, we show that applying regularization to the membrane potential and the spiking output successfully avoids both dead and bursting neurons and significantly decreases the reconstruction error of the spiking auto-encoder. Training regularized networks on the MNIST dataset yields image reconstruction quality comparable to non-spiking baseline models (deterministic and variational auto-encoders) and indicates improvement over earlier approaches. Importantly, we show that, unlike the variational auto-encoder, the spiking latent representations display structure associated with the image class.
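The core building blocks described above can be illustrated with a minimal sketch: a layer of leaky integrate-and-fire (LIF) neurons unrolled over time, plus a spike-density penalty that pushes per-neuron firing rates toward a target, discouraging both dead (never-firing) and bursting (always-firing) neurons. This is not the paper's implementation; the leak factor, threshold, target rate, and the exact form of the penalty are illustrative assumptions, and the surrogate-gradient machinery needed for backpropagation through the spike function is omitted.

```python
import numpy as np

def lif_forward(inputs, beta=0.9, threshold=1.0):
    """Simulate a layer of LIF neurons over discrete timesteps.

    inputs: array of shape (T, N) -- input current per timestep and neuron.
    beta, threshold: illustrative leak factor and firing threshold.
    Returns (spikes, potentials), each of shape (T, N).
    """
    T, N = inputs.shape
    v = np.zeros(N)
    spikes = np.zeros((T, N))
    potentials = np.zeros((T, N))
    for t in range(T):
        v = beta * v + inputs[t]             # leaky integration of input current
        s = (v >= threshold).astype(float)   # emit a spike when threshold is crossed
        v = v - s * threshold                # soft reset: subtract threshold on spike
        spikes[t] = s
        potentials[t] = v
    return spikes, potentials

def spike_density_penalty(spikes, target_rate=0.1):
    """Quadratic penalty on the deviation of each neuron's mean firing
    rate from a target rate; large for both silent and bursting neurons."""
    rates = spikes.mean(axis=0)              # per-neuron firing rate over time
    return float(np.mean((rates - target_rate) ** 2))
```

In a full training loop, a penalty of this kind would be added to the reconstruction loss, and the non-differentiable threshold crossing would be handled with a surrogate gradient so the membrane potential can carry the backpropagated error.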