Federated Learning (FL) is an emerging paradigm through which decentralized devices can collaboratively train a common model. However, a serious concern is the leakage of private information through the gradients exchanged between clients and the parameter server (PS) in FL. To protect gradient information, clients can adopt differential privacy (DP) to add noise and distort the original gradients before they are uploaded to the PS. Nevertheless, model accuracy is significantly impaired by DP noise, making DP impractical in real systems. In this work, we propose a novel Noise Information Secretly Sharing (NISS) algorithm to alleviate the disturbance of DP noise by sharing negated noise among clients. We theoretically prove that: 1) if clients are trustworthy, DP noise can be perfectly offset at the PS; 2) clients can easily distort the negated DP noise to protect themselves in case other clients are not fully trustworthy, though at the cost of lower model accuracy. NISS is particularly applicable for FL across multiple IoT (Internet of Things) systems, in which all IoT devices need to collaboratively train a model. To verify the effectiveness and superiority of the NISS algorithm, we conduct experiments with the MNIST and CIFAR-10 datasets. The experimental results verify our analysis and demonstrate that NISS can improve model accuracy by 21% on average and provide better privacy protection when clients are trustworthy.
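The core cancellation idea can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's actual protocol: each client masks its gradient with DP noise, secretly distributes the negated noise in equal shares to the other clients, and the shares cancel exactly in the PS aggregate. All variable names and the equal-share splitting scheme are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim, sigma = 4, 5, 0.8

# Each client's local gradient (toy values).
grads = [rng.normal(size=dim) for _ in range(num_clients)]

# Each client draws its own DP noise vector.
noises = [rng.normal(scale=sigma, size=dim) for _ in range(num_clients)]

uploads = []
for i in range(num_clients):
    # Client i masks its gradient with its own DP noise ...
    masked = grads[i] + noises[i]
    # ... and adds the negated-noise shares it received from every
    # other client (here: each client splits -noise into equal parts).
    for j in range(num_clients):
        if j != i:
            masked += -noises[j] / (num_clients - 1)
    uploads.append(masked)

# At the PS, all injected noise cancels out in the aggregate:
aggregate = np.mean(uploads, axis=0)
true_mean = np.mean(grads, axis=0)
print(np.allclose(aggregate, true_mean))  # noise perfectly offset
```

In the sum over all uploads, each noise vector appears once with weight +1 (from its owner) and (num_clients - 1) times with weight -1/(num_clients - 1) (from the peers), so it vanishes; any single upload, however, remains noisy, which is what shields individual gradients from the PS.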