Spiking Neural Networks (SNNs) are biologically inspired alternatives to conventional Artificial Neural Networks (ANNs). Despite promising preliminary results, the trade-offs involved in training SNNs in a distributed setting are not well understood. Here, we consider SNNs in a federated learning setting, where a high-quality global model is created by aggregating multiple local models from the clients without sharing any data. We investigate federated learning for training multiple SNNs at clients when two mechanisms reduce the uplink communication cost: i) random masking of the model updates sent from the clients to the server; and ii) client dropouts, in which some clients do not send their updates to the server. We evaluate the performance of the SNNs using a subset of the Spiking Heidelberg Digits (SHD) dataset. The results show that balancing the random masking and client dropout probabilities is crucial to obtaining satisfactory performance for a fixed number of clients.
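To make the two uplink-reduction mechanisms concrete, the following is a minimal sketch of one federated averaging round in which each client is dropped with probability p_drop and each entry of a transmitted update is zeroed with probability p_mask. The function name, the plain averaging rule, and all parameter values are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of one federated round with random masking and client dropout
# (hypothetical helper; p_mask and p_drop values chosen only for illustration).
import numpy as np

rng = np.random.default_rng(0)

def federated_round(global_weights, client_updates, p_mask=0.5, p_drop=0.2):
    """Aggregate client updates into the global model.

    global_weights : 1-D array of current global model parameters.
    client_updates : list of 1-D arrays, one local update (delta) per client.
    p_mask         : probability that an individual entry of an update is zeroed
                     before transmission (random masking).
    p_drop         : probability that a client sends nothing this round (dropout).
    """
    received = []
    for update in client_updates:
        if rng.random() < p_drop:                    # client drops out: no uplink
            continue
        keep = rng.random(update.shape) >= p_mask    # keep each entry with prob. 1 - p_mask
        received.append(update * keep)               # masked entries arrive as zeros
    if not received:                                 # no client reported this round
        return global_weights
    # Plain averaging of the (masked) updates; other weightings are possible.
    return global_weights + np.mean(received, axis=0)

# Toy usage: 5 clients, 10 parameters each.
w = np.zeros(10)
updates = [rng.normal(size=10) for _ in range(5)]
w = federated_round(w, updates, p_mask=0.5, p_drop=0.2)
print(w)
```

In this sketch, increasing p_mask or p_drop lowers the uplink cost but also reduces how much information reaches the server each round, which is the trade-off studied in the abstract.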