Dendritic computation endows biological neurons with rich nonlinear integration and high representational capacity, yet it is largely missing from existing deep spiking neural networks (SNNs). Although detailed multi-compartment models can capture dendritic computations, their high computational cost and limited flexibility make them impractical for deep learning. To combine the advantages of dendritic computation and deep network architectures into a powerful, flexible, and efficient computational model, we propose the dendritic spiking neuron (DendSN). DendSN explicitly models dendritic morphology and nonlinear integration in a streamlined design, yielding substantially higher expressivity than point neurons and broad compatibility with modern deep SNN architectures. Leveraging this efficient formulation and high-performance Triton kernels, dendritic SNNs (DendSNNs) can be trained efficiently and scaled easily to deeper networks. Experiments show that DendSNNs consistently outperform conventional SNNs on classification tasks. Furthermore, inspired by dendritic modulation and synaptic clustering, we introduce the dendritic branch gating (DBG) algorithm for task-incremental learning, which effectively reduces inter-task interference. Additional evaluations show that DendSNNs exhibit superior robustness to noise and adversarial attacks, along with improved generalization in few-shot learning scenarios. Our work is the first to demonstrate the feasibility of training deep SNNs with multiple nonlinear dendritic branches, and it comprehensively analyzes the impact of dendritic computation on representation learning across various machine learning settings, thereby offering a fresh perspective on advancing SNN design.
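To make the core idea concrete, below is a minimal, hedged sketch of a dendritic spiking neuron with multiple nonlinear branches, as described in the abstract. The function name `dend_sn_step`, the choice of `tanh` as the branch nonlinearity, the uniform branch-to-soma coupling, and the soft-reset leaky integrate-and-fire soma are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def dend_sn_step(x, W_branch, v, theta=1.0, beta=0.9):
    """One timestep of a hypothetical dendritic spiking neuron (illustrative only).

    x        : (d_in,) input spikes or currents
    W_branch : (K, d_in) per-branch synaptic weights (K dendritic branches)
    v        : somatic membrane potential (scalar)
    theta    : firing threshold
    beta     : somatic leak factor
    """
    # Each dendritic branch integrates its inputs linearly, then applies a
    # branch-local nonlinearity (tanh here; an assumed choice).
    branch_out = np.tanh(W_branch @ x)            # shape (K,)
    # The soma leakily integrates the summed branch outputs
    # (uniform branch-to-soma coupling assumed).
    v = beta * v + branch_out.sum()
    # Emit a spike when the threshold is crossed, then soft-reset.
    spike = float(v >= theta)
    v = v - spike * theta
    return spike, v
```

The key contrast with a point neuron is that the nonlinearity is applied per branch before somatic summation, so the neuron can represent functions that no single weighted sum followed by one nonlinearity can express.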