Federated learning (FL), which enables edge devices to collaboratively learn a shared model while keeping their training data local, has recently received great attention and offers stronger privacy protection than the traditional centralized learning paradigm. However, sensitive information about the training data can still be inferred from the model parameters shared in FL. Differential privacy (DP) is the state-of-the-art technique for defending against such attacks. The key challenge to achieving DP in FL lies in the adverse impact of DP noise on model accuracy, particularly for deep learning models with large numbers of parameters. This paper develops a novel differentially private FL scheme named Fed-SMP that provides a client-level DP guarantee while maintaining high model accuracy. To mitigate the impact of privacy protection on model accuracy, Fed-SMP leverages a new technique called Sparsified Model Perturbation (SMP), in which local models are first sparsified and then perturbed with Gaussian noise. We provide a tight end-to-end privacy analysis for Fed-SMP using Rényi DP and prove the convergence of Fed-SMP with both unbiased and biased sparsification. Extensive experiments on real-world datasets demonstrate the effectiveness of Fed-SMP in improving model accuracy under the same DP guarantee while simultaneously saving communication cost.
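To make the SMP idea concrete, the following is a minimal sketch of the sparsify-then-perturb step on a single client's model update. It assumes top-k magnitude sparsification (one of the biased sparsifiers the abstract alludes to) together with the standard clip-and-add-Gaussian-noise mechanism; the function name, parameters, and the exact order of operations are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def sparsified_model_perturbation(update, k, clip_norm, noise_mult, rng=None):
    """Sketch of SMP: keep the top-k entries of a local update by magnitude,
    clip its L2 norm to bound sensitivity, then add Gaussian noise only to
    the retained coordinates. All parameter choices here are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    flat = update.ravel().astype(float).copy()

    # Biased top-k sparsifier: zero out all but the k largest-magnitude entries.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]

    # Clip to L2 norm clip_norm so each client's contribution has bounded
    # sensitivity, as in DP-SGD-style Gaussian mechanisms.
    norm = np.linalg.norm(sparse)
    sparse *= min(1.0, clip_norm / (norm + 1e-12))

    # Perturb only the k retained coordinates; sparsification means less
    # total noise is injected than when perturbing the full dense update.
    sparse[idx] += rng.normal(0.0, noise_mult * clip_norm, size=k)
    return sparse.reshape(update.shape)
```

Because noise is added only to the k surviving coordinates, the update stays sparse, which is also what yields the communication savings mentioned above.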