Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other. However, data locality does not provide sufficient privacy protection, and it is desirable to facilitate FL with a rigorous differential privacy (DP) guarantee. Existing DP mechanisms would introduce random noise with magnitude proportional to the model size, which can be quite large for deep neural networks. In this paper, we propose a new FL framework with sparsification-amplified privacy. Our approach integrates random sparsification with gradient perturbation on each agent to amplify the privacy guarantee. Since sparsification increases the number of communication rounds required to achieve a target accuracy, which is unfavorable for the DP guarantee, we further introduce acceleration techniques to reduce the privacy cost. We rigorously analyze the convergence of our approach and utilize Rényi DP to tightly account for the end-to-end DP guarantee. Extensive experiments on benchmark datasets validate that our approach outperforms previous differentially private FL approaches in both privacy guarantee and communication efficiency.
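The per-agent mechanism described above can be sketched in a few lines. This is a generic, illustrative implementation of random sparsification combined with gradient perturbation (clip, then add Gaussian noise to the transmitted coordinates only); the function name, parameters, and noise calibration are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def sparsified_private_gradient(grad, k, clip_norm, noise_mult, rng):
    """Randomly keep k coordinates of `grad`, clip the result to
    `clip_norm`, and perturb the kept coordinates with Gaussian noise.
    Illustrative sketch only -- names and calibration are assumptions."""
    d = grad.size
    # Random sparsification: keep a uniformly random subset of k coordinates.
    keep = rng.choice(d, size=k, replace=False)
    sparse = np.zeros_like(grad)
    sparse[keep] = grad[keep]
    # Clip to bound the sensitivity of the released vector.
    norm = np.linalg.norm(sparse)
    sparse *= min(1.0, clip_norm / (norm + 1e-12))
    # Gradient perturbation: noise only on the coordinates actually sent,
    # so the noise magnitude scales with k rather than the model size d.
    sparse[keep] += rng.normal(0.0, noise_mult * clip_norm, size=k)
    return sparse

rng = np.random.default_rng(0)
g = rng.normal(size=1000)
priv = sparsified_private_gradient(g, k=100, clip_norm=1.0,
                                   noise_mult=1.1, rng=rng)
# The released vector is sparse: at most k of its 1000 entries are nonzero.
print(np.count_nonzero(priv))
```

Because only the k sampled coordinates are noised and transmitted, the added noise no longer scales with the full model dimension, which is the intuition behind the sparsification-amplified privacy guarantee.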