Federated Learning (FL) allows multiple participants to collaboratively train machine learning models by keeping their datasets local and exchanging only model updates. Alas, recent work has highlighted several privacy and robustness weaknesses in FL, via, respectively, membership/property inference and backdoor attacks. In this paper, we investigate to what extent Differential Privacy (DP) can be used to protect not only privacy but also robustness in FL. We present a first-of-its-kind empirical evaluation of Local and Central Differential Privacy (LDP/CDP) techniques in FL, assessing their feasibility and effectiveness. We show that both DP variants do defend against backdoor attacks, with varying levels of protection and utility, and overall much more effectively than previously proposed defenses. They also mitigate white-box membership inference attacks in FL, and ours is the first work to quantify how effectively; neither, however, provides a viable defense against property inference. Our work also provides a re-usable measurement framework to quantify the trade-offs between robustness/privacy and utility in differentially private FL.