Federated learning (FL) is a distributed machine learning approach in which multiple clients collaboratively train a joint model without exchanging their data. Despite FL's unprecedented success in preserving data privacy, its vulnerability to free-rider attacks has attracted increasing attention. Existing defenses may be ineffective against highly camouflaged free-riders or a high percentage of free-riders. To address these challenges, we reconsider the defense from a novel perspective, i.e., model weight evolving frequency. Empirically, we gain a novel insight that during FL training, the model weight evolving frequency of free-riders differs significantly from that of benign clients. Inspired by this insight, we propose a novel defense method based on the model Weight Evolving Frequency, referred to as WEF-Defense. Specifically, we first collect the weight evolving frequency (defined as the WEF-Matrix) during local training. Each client uploads its local model's WEF-Matrix to the server together with its model weights in each iteration. The server then separates free-riders from benign clients based on the differences in their WEF-Matrices. Finally, the server uses a personalized approach to provide different global models to the corresponding clients. Comprehensive experiments conducted on five datasets and five models demonstrate that WEF-Defense achieves better defense effectiveness than state-of-the-art baselines.
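To make the workflow concrete, the sketch below illustrates one plausible reading of the two client/server steps described above: recording a WEF-Matrix during local training and separating clients by their evolving frequency. It is not the authors' implementation; in particular, the assumption that the evolving frequency is measured as per-element direction changes of one monitored layer between consecutive batches, the threshold-based split, and the helper names `collect_wef_matrix` and `split_clients_by_wef` are illustrative choices.

```python
# Minimal illustrative sketch (not the paper's code). Assumes the "weight
# evolving frequency" of a client is approximated by counting, for each
# weight element of one monitored layer, how many times its update direction
# flips between consecutive local training batches. Helper names are
# hypothetical.
import numpy as np
import torch
import torch.nn as nn


def collect_wef_matrix(model: nn.Module, layer_name: str, data_loader, loss_fn, optimizer):
    """Run one round of local training and return (state_dict, WEF-Matrix)."""
    prev_w = None      # monitored layer's weights after the previous batch
    prev_delta = None  # previous per-element update direction
    wef = None         # per-element count of direction changes

    for x, y in data_loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

        w = model.state_dict()[layer_name].detach().cpu().numpy().ravel()
        if prev_w is not None:
            delta = np.sign(w - prev_w)
            if prev_delta is not None:
                if wef is None:
                    wef = np.zeros_like(w)
                wef += (delta != prev_delta).astype(float)  # count direction flips
            prev_delta = delta
        prev_w = w

    return model.state_dict(), wef


def split_clients_by_wef(wef_matrices: dict, threshold: float):
    """Server-side split: flag clients with abnormally low evolving frequency.

    Free-riders that fabricate updates without real training tend to show a
    much lower (often near-zero) evolving frequency than benign clients.
    """
    scores = {cid: float(np.mean(m)) for cid, m in wef_matrices.items()}
    benign = [cid for cid, s in scores.items() if s >= threshold]
    free_riders = [cid for cid, s in scores.items() if s < threshold]
    return benign, free_riders
```

Under this sketch, the server would aggregate only the updates of clients in `benign` into the high-quality global model, while clients flagged as `free_riders` receive a separate (personalized) model, matching the personalized delivery step described in the abstract.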