Federated Learning (FL) allows multiple participants to train machine learning models collaboratively by keeping their datasets local and only exchanging model updates. However, this process is not necessarily free from privacy and robustness vulnerabilities, e.g., via membership, property, and backdoor attacks. This paper investigates whether and to what extent one can use Differential Privacy (DP) to protect both privacy and robustness in FL. To this end, we present a first-of-its-kind evaluation of Local and Central Differential Privacy (LDP/CDP) techniques in FL, assessing their feasibility and effectiveness. Our experiments show that both DP variants do defend against backdoor attacks, albeit with varying protection-utility trade-offs, and overall more effectively than other robustness defenses. DP also mitigates white-box membership inference attacks in FL, and our work is the first to show this empirically. Neither LDP nor CDP, however, defends against property inference. Overall, our work provides a comprehensive, reusable measurement methodology to quantify the trade-offs between robustness/privacy and utility in differentially private FL.
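To make the LDP/CDP distinction concrete, the sketch below contrasts the two settings in a single simulated FL aggregation round. It is an illustrative toy based on the standard Gaussian mechanism with L2 clipping, not the paper's exact protocol; all function names and the `clip_norm`/`sigma` parameters are assumptions for this example. Under LDP, each client perturbs its own update before sending it; under CDP, a trusted server perturbs only the aggregate.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(update, clip_norm):
    """Rescale an update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def ldp_round(client_updates, clip_norm, sigma):
    """Local DP: every client clips and noises its own update
    before it ever leaves the device; the server sees only
    already-perturbed updates."""
    noised = [clip(u, clip_norm) + rng.normal(0.0, sigma * clip_norm, u.shape)
              for u in client_updates]
    return np.mean(noised, axis=0)

def cdp_round(client_updates, clip_norm, sigma):
    """Central DP: a trusted server clips the raw updates,
    averages them, then adds one calibrated noise draw to the
    aggregate (noise scale shrinks with the number of clients)."""
    clipped = [clip(u, clip_norm) for u in client_updates]
    agg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip_norm / len(clipped), agg.shape)
    return agg + noise

# Toy round: three clients, 4-dimensional model updates.
updates = [rng.normal(size=4) for _ in range(3)]
ldp_avg = ldp_round(updates, clip_norm=1.0, sigma=0.5)
cdp_avg = cdp_round(updates, clip_norm=1.0, sigma=0.5)
```

Because CDP adds a single noise draw to the average while LDP adds independent noise per client, CDP typically preserves more utility at a comparable privacy level, which is one axis of the trade-offs the paper measures.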