Federated Learning (FL) is pervasive in privacy-focused IoT environments because it avoids direct privacy leakage by training models on shared gradients rather than raw data. However, recent work shows that the uploaded gradients can be exploited to reconstruct the underlying data, known as gradient leakage attacks, and several defenses have been designed to mitigate this risk by perturbing the gradients. These defenses, however, exhibit weak resilience against strong attacks, because their effectiveness rests on the unrealistic assumption that deep neural networks can be simplified to linear models. In this paper, without such unrealistic assumptions, we present a novel defense, called Refiner, which, instead of perturbing gradients, refines the ground-truth data into robust data that retains sufficient utility while carrying the least amount of private information; the gradients of the robust data are then uploaded. To craft robust data, Refiner encourages the gradients of critical parameters computed on the robust data to stay close to the ground-truth gradients, while leaving the gradients of trivial parameters free to safeguard privacy. Moreover, to exploit the gradients of trivial parameters, Refiner employs a well-designed evaluation network to steer the robust data away from the ground-truth data, further reducing the risk of privacy leakage. Extensive experiments across multiple benchmark datasets demonstrate the superior effectiveness of Refiner in defending against state-of-the-art attacks.
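One way to read this design (a minimal illustrative sketch, not the paper's exact formulation) is as a bi-objective optimization over the robust data: match the ground-truth gradients on the critical parameters while using the evaluation network to push the robust data away from the ground-truth data. Here $\theta_c$ (critical parameters), $\mathcal{L}$ (training loss), $E$ (evaluation network scoring the distance between inputs), and the trade-off weight $\lambda$ are assumed notation introduced only for illustration:

\[
\tilde{x} \;=\; \arg\min_{\hat{x}} \;\; \big\lVert \nabla_{\theta_c} \mathcal{L}(\hat{x}, y) - \nabla_{\theta_c} \mathcal{L}(x, y) \big\rVert_2^2 \;-\; \lambda \, E(\hat{x}, x),
\]

where the first term preserves utility by aligning the gradients of critical parameters with those of the ground-truth data $(x, y)$, and the second term protects privacy by rewarding robust data $\tilde{x}$ that the evaluation network judges to be far from $x$; the gradients of trivial parameters are left unconstrained.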