Federated learning is particularly susceptible to model poisoning and backdoor attacks because individual users have direct control over the training data and model updates. At the same time, the attack power of an individual user is limited because their updates are quickly drowned out by those of many other users. Existing attacks do not account for the future behavior of other users, and thus require many sequential updates whose effects are quickly erased. We propose an attack that anticipates and accounts for the entire federated learning pipeline, including the behavior of other clients, and ensures that backdoors take effect quickly and persist even after multiple rounds of community updates. We show that this new attack is effective in realistic scenarios where the attacker contributes to only a small fraction of randomly sampled rounds, and we demonstrate the attack on image classification, next-word prediction, and sentiment analysis.