Federated learning (FL) is vulnerable to poisoning attacks, where adversaries corrupt the global aggregation results and cause denial-of-service (DoS). Unlike recent model poisoning attacks that optimize the amplitude of malicious perturbations along certain prescribed directions to cause DoS, we propose a Flexible Model Poisoning Attack (FMPA) that can achieve versatile attack goals. We consider a practical threat scenario where no extra knowledge about the FL system (e.g., aggregation rules or updates on benign devices) is available to adversaries. FMPA exploits global historical information to construct an estimator that predicts the next round's global model as a benign reference. It then fine-tunes the reference model to obtain the desired poisoned model, which has low accuracy yet deviates from the reference only by small perturbations. Beyond causing DoS, FMPA naturally extends to a fine-grained controllable attack that can precisely reduce the global accuracy. Armed with such precise control, malicious FL service providers can gain advantages over their competitors without being noticed, opening a new attack surface in FL beyond DoS. Even for the purpose of DoS, experiments show that FMPA significantly decreases the global accuracy, outperforming six state-of-the-art attacks.
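To make the two-stage pipeline concrete, the following is a minimal PyTorch sketch of the idea the abstract describes: predict a benign reference for the next-round global model from historical global models, then fine-tune it into a poisoned model that lowers accuracy while staying close to the reference. The linear-extrapolation estimator, the negated cross-entropy objective, and all identifiers (`predict_next_global`, `craft_poisoned_model`, `alpha`, `lam`) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def predict_next_global(w_prev, w_curr, alpha=1.0):
    """Extrapolate a benign reference for the next-round global model
    from two historical global models (a simple momentum-style estimator,
    assumed here for illustration)."""
    return {k: (w_curr[k] + alpha * (w_curr[k] - w_prev[k])).detach()
            for k in w_curr}

def craft_poisoned_model(model, w_ref, loader, steps=50, lr=1e-2, lam=0.1):
    """Fine-tune the predicted reference into a poisoned model: the
    negated cross-entropy drives accuracy down, while the proximity
    term keeps the perturbation from the reference small."""
    model.load_state_dict(w_ref)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    it = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(loader)
            x, y = next(it)
        opt.zero_grad()
        # Squared distance to the reference bounds the perturbation size.
        prox = sum(((p - w_ref[n]) ** 2).sum()
                   for n, p in model.named_parameters())
        # Negated loss lowers accuracy; lam trades stealth for damage.
        loss = -ce(model(x), y) + lam * prox
        loss.backward()
        opt.step()
    return model.state_dict()
```

In this sketch, `lam` plays the role of the abstract's "small perturbations" requirement: a larger value keeps the poisoned model closer to the predicted benign reference (harder to flag by robust aggregation), while a smaller value allows a sharper accuracy drop; a fine-grained controllable variant would additionally stop fine-tuning once a target accuracy is reached.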