Spiking Neural Networks (SNNs) have received considerable attention not only for their superior energy efficiency with discrete signal processing, but also for their natural suitability for integrating multi-scale biological plasticity. However, most SNNs directly adopt well-established DNN structures, and Neural Architecture Search (NAS) has rarely been used to automatically design architectures for SNNs. The neural motif topology, modular regional structure, and global cross-region connectivity of the human brain are products of natural evolution and can serve as an excellent reference for designing brain-inspired SNN architectures. In this paper, we propose a Multi-Scale Evolutionary Neural Architecture Search (MSE-NAS) for SNNs that simultaneously treats micro-, meso-, and macro-scale brain topologies as the evolutionary search space. MSE-NAS evolves individual neuron operations, the self-organized integration of multiple circuit motifs, and global connectivity across motifs, guided by a brain-inspired indirect evaluation function based on Representational Dissimilarity Matrices (RDMs). This training-free fitness function greatly reduces computational cost and search time, and its task-independent nature enables the searched SNNs to exhibit excellent transferability and scalability. Extensive experiments demonstrate that the proposed algorithm achieves state-of-the-art (SOTA) performance with shorter simulation steps on static datasets (CIFAR10, CIFAR100) and neuromorphic datasets (CIFAR10-DVS and DVS128-Gesture). A thorough analysis further illustrates the significant performance improvement and consistent bio-interpretability derived from the topological evolution at different scales and the RDM-based fitness function.
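As a rough illustration of the idea behind an RDM-based, training-free fitness score (not the paper's implementation; the function names, the use of 1 minus Pearson correlation as the dissimilarity measure, and Spearman rank correlation as the comparison metric are all illustrative assumptions), the sketch below computes a candidate network's RDM from its activations over a set of stimuli and scores it against a reference RDM:

```python
# Minimal sketch (illustrative assumptions, not the paper's code): computing a
# Representational Dissimilarity Matrix (RDM) and an RDM-based fitness score
# that requires no training of the candidate network.
import numpy as np
from scipy.stats import spearmanr


def compute_rdm(activations: np.ndarray) -> np.ndarray:
    """activations: (num_stimuli, num_features) responses of one candidate network."""
    # Dissimilarity between stimuli i and j = 1 - Pearson correlation of their responses.
    corr = np.corrcoef(activations)  # (num_stimuli, num_stimuli)
    return 1.0 - corr


def rdm_fitness(candidate_acts: np.ndarray, reference_rdm: np.ndarray) -> float:
    """Higher is better: rank correlation between the candidate's RDM and a reference RDM."""
    cand_rdm = compute_rdm(candidate_acts)
    iu = np.triu_indices_from(cand_rdm, k=1)  # compare only the upper triangles
    rho, _ = spearmanr(cand_rdm[iu], reference_rdm[iu])
    return float(rho)


# Usage with random placeholder data: 32 stimuli, 128-dimensional responses.
rng = np.random.default_rng(0)
candidate_activations = rng.standard_normal((32, 128))
reference = compute_rdm(rng.standard_normal((32, 128)))
print(rdm_fitness(candidate_activations, reference))
```

Because such a score is computed from forward-pass activations alone, it can serve as a cheap proxy fitness during evolutionary search, avoiding the cost of training every candidate architecture.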