Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm that has attracted increasing attention from researchers and developers. FL keeps users' private data on their devices and exchanges only the gradients of local models to cooperatively train a shared Deep Learning (DL) model via a central custodian. However, the security and fault tolerance of FL have come under increasing scrutiny, because its central-custodian mechanism, or star-shaped architecture, can be vulnerable to malicious attacks and software failures. To address these problems, Swarm Learning (SL) introduces a permissioned blockchain to securely onboard members and dynamically elect a leader, which allows DL to be performed in a highly decentralized manner. Despite the considerable attention SL has received, there are few empirical studies on SL or blockchain-based decentralized FL that provide comprehensive knowledge of the best practices and precautions for deploying SL in real-world scenarios. Therefore, to the best of our knowledge, we conduct the first comprehensive study of SL to date, filling the knowledge gap between SL deployment and developers. In this paper, we conduct experiments on 3 public datasets around 5 research questions, present interesting findings, quantitatively analyze the reasons behind these findings, and offer developers and researchers practical suggestions. The findings suggest that SL is suitable for most application scenarios, regardless of whether the dataset is balanced, polluted, or biased over irrelevant features.