To defend against inference attacks and mitigate sensitive information leakage in Federated Learning (FL), client-level Differentially Private FL (DPFL) has become the de-facto standard for privacy protection: each client clips its local updates and adds random noise. However, existing DPFL methods tend to sharpen the loss landscape and weaken robustness to weight perturbations, resulting in severe performance degradation. To alleviate these issues, we propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient perturbation to mitigate the negative impact of DP. Specifically, DP-FedSAM integrates the Sharpness-Aware Minimization (SAM) optimizer to generate locally flat models with better stability and robustness to weight perturbations; this yields local updates with smaller norms and greater resilience to DP noise, thereby improving performance. From a theoretical perspective, we analyze in detail how DP-FedSAM mitigates the performance degradation induced by DP. Meanwhile, we give rigorous privacy guarantees with Rényi DP and present a sensitivity analysis of the local updates. Finally, we empirically confirm that our algorithm achieves state-of-the-art (SOTA) performance compared with existing SOTA baselines in DPFL.
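The mechanism described above (a SAM ascent/descent step on each client, followed by clipping the local update and adding Gaussian noise before upload) can be sketched as follows. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation; the function names and hyperparameters (sam_local_step, privatize_update, rho, C, sigma, lr) are illustrative choices, not from the paper.

```python
# Minimal sketch of one DP-FedSAM-style client round (illustrative, not the
# authors' code): a SAM ascent/descent step, then clip-and-noise the update.
import torch

def sam_local_step(model, loss_fn, data, target, lr=0.1, rho=0.05):
    """One SAM step: gradient g at w, ascend to w + rho*g/||g||,
    take the gradient there, restore w, descend with that gradient."""
    loss_fn(model(data), target).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    scale = (rho / (torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12)).item()

    with torch.no_grad():  # ascent step (the SAM weight perturbation)
        for p, g in zip(model.parameters(), grads):
            p.add_(g, alpha=scale)
    model.zero_grad()

    loss_fn(model(data), target).backward()  # gradient at the perturbed point

    with torch.no_grad():  # undo the perturbation, then descend
        for p, g in zip(model.parameters(), grads):
            p.sub_(g, alpha=scale)
        for p in model.parameters():
            p.sub_(p.grad, alpha=lr)
    model.zero_grad()

def privatize_update(update, C=1.0, sigma=0.8):
    """Client-level DP: clip the whole local update to L2 norm C,
    then add Gaussian noise calibrated to the sensitivity C."""
    clipped = update / torch.clamp(torch.linalg.vector_norm(update) / C, min=1.0)
    return clipped + torch.normal(0.0, sigma * C, size=clipped.shape)

# Usage: run a few local SAM steps, then privatize the resulting update.
model = torch.nn.Linear(10, 2)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
w0 = torch.nn.utils.parameters_to_vector(model.parameters()).detach().clone()
for _ in range(5):
    sam_local_step(model, torch.nn.functional.cross_entropy, x, y)
w1 = torch.nn.utils.parameters_to_vector(model.parameters()).detach()
noisy_update = privatize_update(w1 - w0)  # what the client would upload
```

Because SAM drives the client toward flat minima, the local update w1 - w0 tends to have a smaller norm, so less of it is destroyed by the clipping step and the added DP noise, which is the intuition the abstract appeals to.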