Federated learning (FL) provides an efficient paradigm for jointly training a global model over data from distributed users. Because the local training data come from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks. Meanwhile, to protect the privacy of local users, FL is often trained with differential privacy (DPFL). In this paper, we therefore ask: Can we leverage the inherent privacy property of DPFL to provide certified robustness against poisoning attacks? Can we further improve the privacy of FL to strengthen such certification? We first investigate both user-level and instance-level privacy in FL and propose novel mechanisms that achieve improved instance-level privacy. We then provide two robustness certification criteria for DPFL at both levels: certified prediction and certified attack cost. Theoretically, we prove the certified robustness of DPFL under a bounded number of adversarial users or instances. Empirically, we conduct extensive experiments to verify our theory under a range of attacks on different datasets. We show that DPFL with a tighter privacy guarantee always provides stronger robustness certification in terms of certified attack cost, while the optimal certified prediction is achieved under a proper balance between privacy protection and utility loss.
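To make the user-level DP setting concrete, the following is a minimal sketch (not the paper's exact algorithm; function name and parameters are illustrative) of one round of differentially private federated averaging, where each user's update is norm-clipped and Gaussian noise is added to the aggregate so that any single user's data has bounded influence:

```python
# Minimal illustrative sketch of user-level DP federated averaging.
# Assumptions (not from the abstract): L2 clipping of per-user updates,
# Gaussian-mechanism noise on the average update.
import numpy as np

def dp_fedavg_round(user_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Aggregate per-user model updates with clipping + Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for u in user_updates:
        norm = np.linalg.norm(u)
        # Scale each update so its L2 norm is at most clip_norm.
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Gaussian mechanism: noise std = z * C / n, where z is the noise
    # multiplier, C the clipping norm, n the number of users.
    sigma = noise_multiplier * clip_norm / len(user_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Toy example with two users' updates.
updates = [np.array([3.0, 4.0]), np.array([0.1, -0.2])]
noisy_avg = dp_fedavg_round(updates, clip_norm=1.0, noise_multiplier=0.5)
print(noisy_avg.shape)
```

A larger `noise_multiplier` (or smaller `clip_norm`) gives a tighter privacy guarantee, which is exactly the knob the certification results above trade off against model utility.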