In this work, besides improving prediction accuracy, we study whether personalization could bring robustness benefits against backdoor attacks. We conduct the first study of backdoor attacks in the pFL framework, testing 4 widely used backdoor attacks against 6 pFL methods on the benchmark datasets FEMNIST and CIFAR-10, for a total of 600 experiments. The study shows that pFL methods with partial model-sharing can significantly boost robustness against backdoor attacks. In contrast, pFL methods with full model-sharing do not show robustness. To analyze the reasons for this varying robustness performance, we provide comprehensive ablation studies on the different pFL methods. Based on our findings, we further propose a lightweight defense method, Simple-Tuning, which empirically improves defense performance against backdoor attacks. We believe that our work could provide guidance for pFL applications from a robustness perspective and offer valuable insights for designing more robust FL methods in the future.
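The abstract does not spell out how Simple-Tuning operates; the following is a minimal PyTorch sketch of the idea as we read it, assuming the defense re-initializes the local linear classifier and retrains only that head on local clean data while freezing the shared feature extractor. The function name `simple_tuning` and the arguments `head_name` and `local_loader` are hypothetical placeholders, not identifiers from the paper's code.

```python
import torch
import torch.nn as nn


def simple_tuning(model: nn.Module, head_name: str, local_loader,
                  epochs: int = 5, lr: float = 0.01) -> nn.Module:
    """Hypothetical sketch of Simple-Tuning: discard the possibly
    backdoored classifier head, then retrain only the head on the
    client's local data with the feature extractor frozen."""
    # Freeze every parameter in the model first.
    for p in model.parameters():
        p.requires_grad = False

    # Re-initialize the classifier head (assumed to be an nn.Linear
    # attribute of the model) and make it trainable again.
    head = getattr(model, head_name)
    head.reset_parameters()
    for p in head.parameters():
        p.requires_grad = True

    opt = torch.optim.SGD(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in local_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```

Retraining only the head keeps the defense lightweight: the expensive feature extractor is untouched, and the freshly initialized classifier never sees the attacker's trigger-label mapping, which is consistent with the abstract's claim of an empirical robustness gain at low cost.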