Federated learning (FL) provides a highly efficient decentralized machine learning framework in which the training data remain distributed at remote clients in a network. Although FL enables a privacy-preserving mobile edge computing framework using IoT devices, recent studies have shown that this approach is susceptible to poisoning attacks from remote clients. To defend against poisoning attacks on FL, we propose a \textit{two-phase} defense algorithm called {Lo}cal {Ma}licious Facto{r} (LoMar). In phase I, LoMar scores model updates from each remote client by measuring the relative distribution over their neighbors using a kernel density estimation method. In phase II, an optimal threshold is approximated to distinguish malicious updates from clean ones from a statistical perspective. Comprehensive experiments on four real-world datasets show that our defense strategy can effectively protect the FL system. Specifically, under a label-flipping attack on the Amazon dataset, LoMar increases the target-label testing accuracy from $96.0\%$ to $98.8\%$ and the overall averaged testing accuracy from $90.1\%$ to $97.0\%$ compared with FG+Krum.
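The phase-I scoring idea, measuring each client's update against a kernel density estimate over its neighbors' updates, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the fixed Gaussian bandwidth, and the choice of $k$ nearest neighbors by Euclidean distance are all assumptions made for the sketch.

```python
import math

def kde_score(point, neighbors, bandwidth=1.0):
    """Average Gaussian kernel density of `point` under its neighbors' updates.
    (Illustrative helper; the bandwidth is a hypothetical fixed value.)"""
    d = len(point)
    norm = (2 * math.pi * bandwidth ** 2) ** (d / 2)
    total = 0.0
    for nb in neighbors:
        sq_dist = sum((p - q) ** 2 for p, q in zip(point, nb))
        total += math.exp(-sq_dist / (2 * bandwidth ** 2)) / norm
    return total / len(neighbors)

def lomar_style_scores(updates, k=3, bandwidth=1.0):
    """Score each client's flattened update by the KDE over its k nearest
    neighbors; a low score suggests the update lies off the clean distribution."""
    scores = []
    for i, u in enumerate(updates):
        # Squared distances to every other client's update
        others = [(sum((p - q) ** 2 for p, q in zip(u, v)), v)
                  for j, v in enumerate(updates) if j != i]
        others.sort(key=lambda t: t[0])
        nbrs = [v for _, v in others[:k]]
        scores.append(kde_score(u, nbrs, bandwidth))
    return scores
```

Under this sketch, a poisoned update far from the clean cluster receives a near-zero density score, so a phase-II threshold on the scores can separate it from benign clients.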