Federated learning is vulnerable to various attacks, such as model poisoning and backdoor attacks, even when existing defense strategies are in place. To address this challenge, we propose an attack-adaptive aggregation strategy that defends against such attacks for robust federated learning. The proposed approach trains a neural network with an attention mechanism that learns the vulnerability of federated learning models from a set of plausible attacks. To the best of our knowledge, our aggregation strategy is the first that can adapt to defend against various attacks in a data-driven fashion. Our approach achieves competitive performance in defending against model poisoning and backdoor attacks in federated learning tasks on image and text datasets.
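The aggregation idea above can be illustrated with a minimal sketch. The snippet below is not the paper's trained attention network; it stands in a simplified dot-product attention for the learned scoring, and the `attention_aggregate` function, the `query` reference direction, and the toy updates are all hypothetical illustrations. The intuition it shows is the same: client updates that score poorly against a reference receive small softmax weights, so a poisoned update contributes little to the aggregate.

```python
import numpy as np

def attention_aggregate(client_updates, query):
    """Attention-weighted aggregation of client model updates.

    client_updates: array of shape (n_clients, dim), one flattened
        update per client.
    query: array of shape (dim,), a reference direction; in the paper
        this role is played by a trained attention network, here a
        plain scaled dot-product score is used as a simplification.
    Returns the weighted-average update and the attention weights.
    """
    # Scaled dot-product scores between each update and the query.
    scores = client_updates @ query / np.sqrt(query.size)
    # Softmax (with max-subtraction for numerical stability).
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted average of the client updates.
    return weights @ client_updates, weights

# Toy example: two benign updates aligned with the query and one
# poisoned update pointing the opposite way.
query = np.ones(4)
updates = np.vstack([
    np.full(4, 1.0),    # benign client
    np.full(4, 0.9),    # benign client
    np.full(4, -1.0),   # poisoned client
])
aggregate, weights = attention_aggregate(updates, query)
```

In this toy run the poisoned client's weight is driven close to zero, so the aggregate stays near the benign direction; the paper's contribution is learning such a scoring function from simulated attacks rather than fixing it by hand.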