Federated learning seeks to address the problem of isolated data islands by having clients disclose only their locally trained models. However, it has been demonstrated that private information can still be inferred by analyzing local model parameters, such as the weights of a deep neural network. Recently, differential privacy has been applied to federated learning to protect data privacy, but the added noise may significantly degrade learning performance. In previous work, training parameters were typically clipped equally and noise was added uniformly; the heterogeneity and convergence of the training parameters were not considered. In this paper, we propose a differentially private scheme for federated learning with adaptive noise (Adap DP-FL). Specifically, owing to gradient heterogeneity, we perform adaptive gradient clipping for different clients and different rounds; owing to gradient convergence, we add correspondingly decreasing noise. Extensive experiments on real-world datasets demonstrate that our Adap DP-FL significantly outperforms previous methods.
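To make the two ideas in the abstract concrete, below is a minimal sketch of a per-client update that (a) adapts its clipping threshold to that client's recent gradient norms and (b) shrinks the Gaussian noise scale as rounds progress. The function name, the threshold update via the previous norm, and the exponential decay schedule are illustrative assumptions for exposition, not the paper's exact adaptive rules.

```python
import numpy as np

def adaptive_clip_and_noise(grad, prev_norm, round_idx,
                            sigma0=1.0, decay=0.99):
    """Sketch of per-client adaptive clipping with round-decaying noise.

    Assumptions (hypothetical, not from the paper): the clipping
    threshold is set to this client's previously observed gradient
    norm, and the noise multiplier follows sigma0 * decay**round_idx.
    """
    clip = prev_norm                       # threshold adapts per client and per round
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / (norm + 1e-12))  # standard norm clipping
    sigma = sigma0 * (decay ** round_idx)  # decreasing noise as training converges
    noisy = clipped + np.random.normal(0.0, sigma * clip, size=grad.shape)
    return noisy, norm                     # return norm to update the threshold

# Example: one client's gradient across two rounds.
g = np.random.randn(10)
noisy_g, new_norm = adaptive_clip_and_noise(g, prev_norm=1.0, round_idx=0)
noisy_g2, _ = adaptive_clip_and_noise(g, prev_norm=new_norm, round_idx=1)
```

The design intuition is that a per-client threshold avoids over-clipping clients with naturally large gradients, while a decaying noise multiplier spends less of the privacy-induced distortion late in training, when gradients are small and easily drowned out by fixed-scale noise.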