Federated learning (FL) has recently emerged as an attractive distributed learning framework in which numerous wireless end-user devices collaboratively train a global model while their data remain local. Compared with the traditional machine learning paradigm that collects user data for centralized storage, which imposes a heavy communication burden and raises data privacy concerns, this approach not only saves network bandwidth but also protects data privacy. Despite this promising prospect, Byzantine attacks, a long-standing threat in conventional distributed systems, have been found to be rather effective against FL as well. In this paper, we conduct a comprehensive investigation of state-of-the-art strategies for defending against Byzantine attacks in FL. We first provide a taxonomy of the existing defense solutions according to the techniques they use, followed by a thorough comparison and discussion. We then propose a new Byzantine attack method, called the weight attack, that defeats these defense schemes, and conduct experiments to demonstrate its threat. The results show that existing defense solutions, although abundant, are still far from fully protecting FL. Finally, we indicate possible countermeasures against the weight attack, and highlight several challenges and future research directions for mitigating Byzantine attacks in FL.
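To make the setting concrete, the sketch below shows a single FL aggregation round in which one Byzantine client submits an adversarially scaled update, and contrasts plain FedAvg with coordinate-wise median, one classic robust aggregation rule of the kind this survey categorizes. This is a minimal illustration under assumed toy values; it is not the weight attack proposed in this paper, nor any specific defense from the taxonomy.

```python
import numpy as np

# Illustrative sketch: one FL aggregation round with a Byzantine client.
# All dimensions, counts, and values are assumptions for demonstration only.
rng = np.random.default_rng(0)
dim, n_clients = 10, 8

# Honest clients send similar gradient-like updates (centered near 1.0).
honest = [rng.normal(loc=1.0, scale=0.1, size=dim) for _ in range(n_clients - 1)]

# A single Byzantine client sends an arbitrary, adversarially scaled update.
byzantine = [-50.0 * np.ones(dim)]
updates = np.stack(honest + byzantine)

# Plain FedAvg (coordinate-wise mean): one attacker can shift the aggregate.
fedavg = updates.mean(axis=0)

# Coordinate-wise median: a well-known Byzantine-robust aggregation rule.
robust = np.median(updates, axis=0)

print("FedAvg, first coordinate:", fedavg[0])  # pulled far from 1.0
print("Median, first coordinate:", robust[0])  # stays near 1.0
```

Running this shows the mean dragged far from the honest consensus while the median remains close to it, which is the basic intuition behind statistics-based defenses, even though, as the paper argues, such rules can still be circumvented by carefully crafted attacks.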