We study backdoor attacks in peer-to-peer federated learning systems on different graph topologies and datasets. We show that only 5% of nodes acting as attackers are sufficient to perform a backdoor attack with a 42% attack success rate, without decreasing the accuracy on clean data by more than 2%. We also demonstrate that the attacker can amplify the attack by crashing a small number of nodes. We evaluate defenses proposed in the context of centralized federated learning and show that they are ineffective in peer-to-peer settings. Finally, we propose a defense that mitigates the attack by applying different clipping norms to the model updates received from peers and to the local model trained by each node.
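To make the proposed defense concrete, the sketch below shows one way the differential clipping idea could look in code: updates received from peers are clipped to a tighter L2-norm bound than the node's own locally trained update before averaging. This is a minimal illustration, not the paper's implementation; the function names (`clip_by_norm`, `aggregate`) and the parameter values `peer_clip` and `local_clip` are assumptions chosen for the example.

```python
import numpy as np

def clip_by_norm(update, max_norm):
    """Scale an update down so its L2 norm does not exceed max_norm."""
    norm = np.linalg.norm(update)
    if norm > max_norm:
        return update * (max_norm / norm)
    return update

def aggregate(local_model, local_update, peer_updates,
              peer_clip=0.5, local_clip=2.0):
    """Average clipped updates: peer updates get a tighter norm bound
    (peer_clip) than the node's own locally trained update (local_clip).
    Bound values here are illustrative, not taken from the paper."""
    clipped = [clip_by_norm(u, peer_clip) for u in peer_updates]
    clipped.append(clip_by_norm(local_update, local_clip))
    return local_model + np.mean(clipped, axis=0)

# Toy usage: a 10-dimensional model, two benign peers, and one attacker
# whose large-norm (backdoored) update is attenuated by the peer bound.
rng = np.random.default_rng(0)
model = np.zeros(10)
own_update = rng.normal(scale=0.1, size=10)
peers = [rng.normal(scale=0.1, size=10) for _ in range(2)]
peers.append(rng.normal(scale=5.0, size=10))  # malicious, oversized update
model = aggregate(model, own_update, peers)
```

The design intuition is that a node can afford a looser bound on its own update, which it trained and can trust, while bounding peer contributions more aggressively limits how much a backdoored update can shift the averaged model in any single round.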