Federated learning is inherently vulnerable to model poisoning attacks because its decentralized nature allows attackers to participate with compromised devices. In a model poisoning attack, the attacker degrades the model's performance on targeted sub-tasks (e.g., classifying planes as birds) by uploading "poisoned" updates. In this report we introduce \algoname{}, a novel defense that uses global top-k update sparsification and device-level gradient clipping to mitigate model poisoning attacks. We propose a theoretical framework for analyzing the robustness of defenses against poisoning attacks, and provide robustness and convergence analyses of our algorithm. To validate its empirical efficacy, we conduct a large-scale, open-source evaluation across multiple computer-vision and federated-learning benchmark datasets.
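Schematically, the two mechanisms can be summarized as follows (a sketch under assumed notation, not the paper's exact formulation: $\Delta_i$ denotes device $i$'s update, $C$ the device-level clipping bound, and $k$ the sparsification budget):
\[
\tilde{\Delta}_i = \Delta_i \cdot \min\!\left(1, \frac{C}{\lVert \Delta_i \rVert_2}\right),
\qquad
\Delta_{\mathrm{global}} = \mathrm{TopK}_k\!\left(\frac{1}{n}\sum_{i=1}^{n} \tilde{\Delta}_i\right),
\]
where each device update is first clipped to an $\ell_2$ norm of at most $C$, the clipped updates are averaged, and only the $k$ largest-magnitude coordinates of the aggregate are retained.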