To defend against inference attacks and mitigate sensitive information leakage in Federated Learning (FL), client-level Differentially Private FL (DPFL) is the de facto standard for privacy protection: it clips local updates and adds random noise. However, existing DPFL methods tend to produce a sharp loss landscape with poor robustness to weight perturbation, resulting in severe performance degradation. To alleviate these issues, we propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient perturbation to mitigate the negative impact of DP. Specifically, DP-FedSAM integrates the Sharpness-Aware Minimization (SAM) optimizer to generate locally flat models with improved stability and robustness to weight perturbation, which yields local updates with small norms that are robust to DP noise, thereby improving performance. To further reduce the magnitude of the random noise while achieving better performance, we propose DP-FedSAM-$top_k$, which adopts a local update sparsification technique. From the theoretical perspective, we present a convergence analysis to investigate how our algorithms mitigate the performance degradation induced by DP. Meanwhile, we give rigorous privacy guarantees via R\'enyi DP, a sensitivity analysis of the local updates, and a generalization analysis. Finally, we empirically confirm that our algorithms achieve state-of-the-art (SOTA) performance compared with existing baselines in DPFL.
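To make the mechanism summarized above concrete, the following is a minimal NumPy sketch of one client's round: a SAM-perturbed local descent, followed by clipping the whole local update and adding Gaussian noise scaled to the clipping threshold (the client-level sensitivity). The flattened parameter vector, the hypothetical `grad_fn` callback returning a stochastic loss gradient, the hyperparameters (`rho`, `clip_norm`, `noise_mult`, `k`), and the choice to apply the $top_k$ sparsification after noising (as DP-preserving post-processing) are our own illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def sam_gradient(w, grad_fn, rho=0.05):
    """One SAM step: evaluate the gradient at the adversarially
    perturbed point w + rho * g / ||g|| (sharpness-aware ascent)."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    return grad_fn(w + eps)

def dp_client_update(w_global, grad_fn, lr=0.1, local_steps=5, rho=0.05,
                     clip_norm=1.0, noise_mult=1.0, k=None, rng=None):
    """Client-level DP local round (illustrative sketch):
    local SAM-SGD, clip the whole local update to clip_norm,
    add Gaussian noise with std noise_mult * clip_norm, and
    optionally keep only the top-k coordinates by magnitude
    (assumed ordering: sparsify after noising, so it is plain
    post-processing and the privacy guarantee is unaffected)."""
    rng = rng or np.random.default_rng()
    w = w_global.copy()
    for _ in range(local_steps):
        w -= lr * sam_gradient(w, grad_fn, rho)
    delta = w - w_global
    # Clip the local update so its norm (the sensitivity) is at most clip_norm.
    delta *= min(1.0, clip_norm / (np.linalg.norm(delta) + 1e-12))
    # Gaussian mechanism: noise std is proportional to the sensitivity.
    delta += rng.normal(0.0, noise_mult * clip_norm, size=delta.shape)
    if k is not None:  # DP-FedSAM-$top_k$-style sparsification (assumed placement)
        mask = np.zeros_like(delta)
        mask[np.argsort(np.abs(delta))[-k:]] = 1.0
        delta *= mask
    return delta

# Toy usage: quadratic loss f(w) = 0.5 * ||w - w_star||^2, so grad(w) = w - w_star.
w_star = np.array([1.0, -2.0, 0.5, 3.0])
delta = dp_client_update(np.zeros(4), lambda w: w - w_star, k=2)
```

As in standard DPFL, the server would then average the noisy updates from the sampled clients and apply the result to the global model; the sketch only covers the client side, where the clipping and noising described in the abstract take place.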