Recently, a number of backdoor attacks against Federated Learning (FL) have been proposed. In such attacks, an adversary injects poisoned model updates into the federated model aggregation process with the goal of manipulating the aggregated model into providing false predictions on specific adversary-chosen inputs. Several defenses have been proposed, but none of them can effectively protect the FL process against so-called multi-backdoor attacks, in which the adversary injects multiple different backdoors simultaneously, without severely impacting the benign performance of the aggregated model. To overcome this challenge, we introduce FLGUARD, a poisoning defense framework that defends FL against state-of-the-art backdoor attacks while maintaining the benign performance of the aggregated model. Moreover, FL is also vulnerable to inference attacks, in which a malicious aggregator infers information about clients' training data from their model updates. To thwart such attacks, we augment FLGUARD with state-of-the-art secure computation techniques that evaluate the FLGUARD algorithm privately. We provide a formal argument for the effectiveness of FLGUARD and extensively evaluate it against known backdoor attacks on several datasets and applications (including image classification, word prediction, and IoT intrusion detection), demonstrating that FLGUARD can entirely remove backdoors with a negligible effect on accuracy. We also show that private FLGUARD achieves practical runtimes.
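To make the threat model concrete, the sketch below illustrates, purely for exposition, how a single scaled, poisoned update can dominate a plain (unweighted) FedAvg aggregate; the function names, client count, and scaling factor are illustrative assumptions and do not describe FLGUARD's internals.

```python
# Illustrative sketch (not from the paper): a backdoor adversary biasing
# plain FedAvg by submitting one scaled, poisoned model update.
import numpy as np

def fed_avg(updates):
    """Unweighted FedAvg: average the clients' update vectors."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
n_clients, dim = 10, 4  # hypothetical setup sizes

# Benign clients submit small, similar updates.
benign = [rng.normal(0.0, 0.1, dim) for _ in range(n_clients - 1)]

# The adversary crafts an update in its backdoor direction and scales it by
# the number of clients to compensate for the 1/n averaging factor.
backdoor_direction = np.ones(dim)
poisoned = n_clients * backdoor_direction

global_update = fed_avg(benign + [poisoned])
print(global_update)  # pulled strongly toward the backdoor direction
```

A poisoning defense such as FLGUARD aims to filter or bound such anomalous contributions so that the aggregated model is not manipulated, while leaving benign updates largely unaffected.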