Recent years have seen a surge of interest in spiking neural networks (SNNs) owing to their remarkable potential for handling time-dependent and event-driven data. The performance of SNNs hinges not only on selecting an appropriate architecture and fine-tuning the connection weights, as in conventional ANNs, but also on the meticulous configuration of the intrinsic structures within spiking computations. However, there has been a dearth of comprehensive studies examining the impact of these intrinsic structures, and developers consequently find it challenging to apply a standardized SNN configuration across diverse datasets or tasks. This work delves deep into the intrinsic structures of SNNs. We first identify two pivotal components of intrinsic structures, the integration operation and the firing-reset mechanism, by elucidating their influence on the expressivity of SNNs. Furthermore, we draw two key conclusions: the membrane time hyper-parameter is intimately linked to the eigenvalues of the integration operation and thus dictates the functional topology of the spiking dynamics, while the hyper-parameters of the firing-reset mechanism govern the overall firing capacity of an SNN, mitigating the effect of the injection ratio or sampling density of the input data. These findings explain why the efficacy of SNNs depends heavily on the configuration of their intrinsic structures and suggest that enhancing the adaptability of these structures can improve the overall performance and applicability of SNNs. Motivated by this insight, we propose two feasible approaches to enhance SNN learning: leveraging self-connection architectures and employing stochastic spiking neurons to augment the adaptability of the integration operation and the firing-reset mechanism, respectively. We verify the effectiveness of the proposed methods from both theoretical and practical perspectives.
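To make the two intrinsic structures concrete, the following minimal sketch (not the authors' implementation; all names, hyper-parameter values, and the sigmoid firing probability are illustrative assumptions) shows a discrete-time LIF-style neuron in which the membrane time constant sets the eigenvalue of the leaky integration map, the threshold/reset pair implements the firing-reset mechanism, and an optional self-connection weight and stochastic firing flag correspond to the two proposed enhancements.

```python
import numpy as np

def lif_step(v, x, spike_prev, tau_m=2.0, dt=1.0, v_th=1.0, v_reset=0.0,
             w_self=0.0, stochastic=False, rng=None):
    """One update of a single LIF-style neuron (illustrative sketch).

    v          : membrane potential from the previous step
    x          : input current at this step
    spike_prev : spike emitted at the previous step (feeds the self-connection)
    """
    lam = np.exp(-dt / tau_m)                 # leak factor: eigenvalue of the integration operation
    v = lam * v + x + w_self * spike_prev     # leaky integration (+ optional self-connection)

    if stochastic:
        # stochastic spiking neuron: fire with a probability that grows with v
        rng = rng or np.random.default_rng()
        p_fire = 1.0 / (1.0 + np.exp(-(v - v_th)))
        spike = float(rng.random() < p_fire)
    else:
        spike = float(v >= v_th)              # deterministic threshold firing

    v = v * (1.0 - spike) + v_reset * spike   # hard reset after a spike
    return v, spike

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    v, s = 0.0, 0.0
    for t in range(10):
        v, s = lif_step(v, x=0.6, spike_prev=s, w_self=0.2,
                        stochastic=True, rng=rng)
        print(f"t={t}  v={v:.3f}  spike={int(s)}")
```

In this toy setting, shrinking tau_m drives the leak factor toward zero (a faster-forgetting integration map), while raising v_th or lowering the input magnitude reduces how often the threshold is crossed, i.e. the neuron's firing capacity.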