Recent advances in federated learning have demonstrated its promising capability to learn on decentralized datasets. However, a considerable body of work has raised concerns about the risk of adversaries participating in the framework to poison the global model for malicious purposes. This paper investigates the feasibility of model poisoning for backdoor attacks through the rare word embeddings of NLP models. In text classification, fewer than 1% of adversary clients suffice to manipulate the model output without any drop in performance on clean sentences. For a less complex dataset, a mere 0.1% of adversary clients is enough to poison the global model effectively. We also propose a technique specialized to the federated learning scheme, called Gradient Ensemble, which enhances the backdoor performance in all of our experimental settings.