Federated Learning (FL) offers a promising distributed learning paradigm, as it seeks to protect users' privacy by not sharing their private training data. Recent research has shown, however, that FL is susceptible to model inversion attacks, which can reconstruct users' private data by eavesdropping on the shared gradients. Existing defenses cannot withstand stronger attacks and exhibit a poor trade-off between privacy and performance. In this paper, we present a straightforward yet effective defense strategy based on obfuscating the gradients of sensitive data with concealing data. Specifically, we alter a few samples within a mini-batch so that they mimic the sensitive data at the gradient level. Using a gradient projection technique, our method obscures sensitive data without sacrificing FL performance. Our extensive evaluations demonstrate that, compared to other defenses, our technique offers the highest level of protection while preserving FL performance. Our source code is available in the repository.
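To make the gradient-level mimicking idea concrete, below is a minimal sketch in PyTorch of one plausible reading of the abstract: a "concealing" sample is optimized so that the gradient it induces on the model aligns (in cosine similarity) with the gradient of the sensitive sample it stands in for. All names, shapes, the toy model, and the cosine-alignment objective are illustrative assumptions, not the paper's exact algorithm or its gradient projection step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for the FL client model and a private sample.
model = nn.Linear(32, 10)
criterion = nn.CrossEntropyLoss()
x_sensitive = torch.randn(1, 32)   # private sample to protect
y_sensitive = torch.tensor([3])

# Target gradient produced by the sensitive sample.
loss_s = criterion(model(x_sensitive), y_sensitive)
g_target = [g.detach() for g in torch.autograd.grad(loss_s, model.parameters())]

# Concealing sample, initialized randomly and optimized directly.
x_conceal = torch.randn(1, 32, requires_grad=True)
opt = torch.optim.Adam([x_conceal], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    loss_c = criterion(model(x_conceal), y_sensitive)
    # create_graph=True keeps the graph so we can backprop through the gradient.
    g_conceal = torch.autograd.grad(loss_c, model.parameters(), create_graph=True)
    # Maximize cosine similarity between concealing and sensitive gradients,
    # steering the concealing sample's gradient toward the target direction.
    match = sum(F.cosine_similarity(gc.flatten(), gt.flatten(), dim=0)
                for gc, gt in zip(g_conceal, g_target))
    (-match).backward()
    opt.step()
```

In an FL round, such a concealing sample would replace the sensitive one inside the shared mini-batch, so the aggregated gradient still carries useful signal while an eavesdropper reconstructing from it recovers the concealing data rather than the private input.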