To defend against inference attacks and mitigate sensitive information leakage in Federated Learning (FL), client-level Differentially Private FL (DPFL) is the de-facto standard for privacy protection, clipping local updates and adding random noise. However, existing DPFL methods tend to produce a sharp loss landscape and exhibit poor robustness to weight perturbation, resulting in severe performance degradation. To alleviate these issues, we propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient perturbation to mitigate the negative impact of DP. Specifically, DP-FedSAM integrates the Sharpness Aware Minimization (SAM) optimizer to generate locally flat models with improved stability and weight-perturbation robustness, which yields local updates with small norms that are robust to DP noise, thereby improving performance. To further reduce the magnitude of the random noise while achieving better performance, we propose DP-FedSAM-$top_k$ by adopting a local update sparsification technique. From the theoretical perspective, we present a convergence analysis that investigates how our algorithms mitigate the performance degradation induced by DP. Meanwhile, we give rigorous privacy guarantees with R\'enyi DP, a sensitivity analysis of local updates, and a generalization analysis. Finally, we empirically confirm that our algorithms achieve state-of-the-art (SOTA) performance compared with existing SOTA baselines in DPFL.
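To make the client-side procedure concrete, the following is a minimal sketch of one local round combining a SAM update, norm clipping, optional $top_k$ sparsification, and Gaussian noise. It is illustrative only: the function name `client_update_dp_fedsam`, the toy `grad_fn`, and the default values of `rho`, `clip_C`, `sigma`, and `k` are assumptions for exposition, not the paper's exact implementation or hyperparameters.

```python
import numpy as np

def client_update_dp_fedsam(w_global, grad_fn, lr=0.1, rho=0.05,
                            clip_C=1.0, sigma=1.0, local_steps=5,
                            k=None, rng=None):
    """Sketch of one client's local round: SAM steps, then clip,
    optionally top_k-sparsify, and add Gaussian noise to the update.
    Names and defaults are illustrative, not the paper's settings."""
    rng = np.random.default_rng() if rng is None else rng
    w = w_global.copy()
    for _ in range(local_steps):
        g = grad_fn(w)
        # SAM: ascend to a worst-case nearby point, then descend from it
        eps = rho * g / (np.linalg.norm(g) + 1e-12)
        g_sam = grad_fn(w + eps)
        w -= lr * g_sam
    delta = w - w_global                                   # local update
    # clip the update so its sensitivity is bounded by clip_C
    delta *= min(1.0, clip_C / (np.linalg.norm(delta) + 1e-12))
    if k is not None:                                      # DP-FedSAM-top_k variant
        mask = np.zeros_like(delta)
        mask[np.argsort(np.abs(delta))[-k:]] = 1.0
        delta *= mask
    # Gaussian mechanism: noise scaled to the clipping threshold
    delta += rng.normal(0.0, sigma * clip_C, size=delta.shape)
    return delta

# toy usage on a quadratic loss f(w) = 0.5 * ||w - w*||^2
w_star = np.array([1.0, -2.0, 0.5])
grad = lambda w: w - w_star
update = client_update_dp_fedsam(np.zeros(3), grad, k=2)
```

In this sketch, the server would aggregate the noisy, clipped `update` vectors from sampled clients as in standard DPFL; the SAM perturbation `eps` is what drives the flatter local minima and smaller update norms described above.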