As neural networks gain widespread adoption in resource-constrained embedded devices, there is a growing need for low-power neural systems. Spiking Neural Networks (SNNs) are emerging as an energy-efficient alternative to traditional Artificial Neural Networks (ANNs), which are known to be computationally intensive. From an application perspective, since federated learning involves many energy-constrained devices, there is substantial scope to leverage the energy efficiency of SNNs. Despite this importance, little attention has been paid to training SNNs in large-scale distributed settings such as federated learning. In this paper, we bring SNNs to a more realistic federated learning scenario. Specifically, we propose a federated learning framework for decentralized and privacy-preserving training of SNNs. To validate the proposed framework, we experimentally evaluate the advantages of SNNs on various aspects of federated learning using the CIFAR10 and CIFAR100 benchmarks. We observe that SNNs outperform ANNs in overall accuracy by over 15% when the data is distributed across a large number of clients in the federation, while providing up to 5.3x energy efficiency. Beyond efficiency, we also analyze the sensitivity of the proposed federated SNN framework to data distribution among the clients, stragglers, and gradient noise, and perform a comprehensive comparison with ANNs.
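To make the setup concrete, below is a minimal sketch of one federated communication round over SNN clients. The abstract does not name the aggregation rule, so the standard FedAvg dataset-size-weighted average (McMahan et al., 2017) is assumed here, and on-device SNN training (e.g., surrogate-gradient descent) is stubbed out as a toy gradient step; local_update, fedavg, and all parameters are illustrative, not the paper's implementation.

import numpy as np

def local_update(global_weights, n_steps=5, lr=0.1, rng=None):
    # Stand-in for on-device SNN training on a client's private data;
    # a real client would run surrogate-gradient descent for several epochs.
    rng = rng or np.random.default_rng()
    w = global_weights.copy()
    for _ in range(n_steps):
        w -= lr * rng.normal(scale=0.01, size=w.shape)  # simulated gradient step
    return w

def fedavg(client_weights, client_sizes):
    # Server-side aggregation: mean of client models weighted by local dataset size.
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# One communication round with 10 clients holding differently sized datasets.
rng = np.random.default_rng(0)
global_w = np.zeros(100)
sizes = [int(rng.integers(100, 1000)) for _ in range(10)]
updates = [local_update(global_w, rng=rng) for _ in sizes]
global_w = fedavg(updates, sizes)

In this scheme only model weights leave a device, never raw data, which is what makes the training decentralized and privacy-preserving.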