Federated Learning (FL), a distributed learning paradigm that aggregates information from diverse clients to train a shared global model, has demonstrated great success. However, malicious clients can perform poisoning attacks and model replacement to introduce backdoors into the trained global model. Although there have been intensive studies designing robust aggregation methods and empirically robust federated training protocols against backdoors, existing approaches lack robustness certification. This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), for training certifiably robust FL models against backdoors. Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification against backdoors of bounded magnitude. Our certification also specifies its relation to federated learning parameters, such as the instance-level poisoning ratio, the number of attackers, and the number of training iterations. Empirically, we conduct comprehensive experiments across a range of federated datasets and provide the first benchmark for certified robustness against backdoor attacks in federated learning. Our code is available at https://github.com/AI-secure/CRFL.
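To make the clipping-and-smoothing mechanism concrete, below is a minimal sketch (not the authors' implementation) of the server-side step the abstract describes: after each aggregation round, the global model parameters are clipped to a norm ball and perturbed with isotropic Gaussian noise, which limits how much any single backdoored update can shift the model. The function name `clip_and_perturb` and the hyperparameter values `clip_norm` and `sigma` are illustrative assumptions, not taken from the paper or its repository.

```python
import torch

def clip_and_perturb(global_params, clip_norm=15.0, sigma=0.01):
    """Sketch of CRFL-style parameter clipping and smoothing.

    Clips the flattened parameter vector to L2 norm `clip_norm`,
    then adds Gaussian noise with standard deviation `sigma`.
    `global_params` is an iterable of parameter tensors.
    """
    flat = torch.cat([p.detach().view(-1) for p in global_params])
    norm = flat.norm(p=2).item()
    # Shrink only if the parameters lie outside the norm ball.
    scale = min(1.0, clip_norm / (norm + 1e-12))
    perturbed = []
    for p in global_params:
        noise = torch.randn_like(p) * sigma  # isotropic Gaussian smoothing
        perturbed.append(p.detach() * scale + noise)
    return perturbed
```

In the full framework, the same Gaussian smoothing is also applied to the final model parameters at test time, so that predictions are made by a smoothed classifier whose robustness can be certified, analogous to randomized smoothing for input perturbations.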