The longstanding goals of federated learning (FL) require rigorous privacy guarantees and low communication overhead while maintaining relatively high model accuracy. However, simultaneously achieving all of these goals is extremely challenging. In this paper, we propose a novel framework called hierarchical federated learning (H-FL) to tackle this challenge. Considering the degradation of model performance due to the statistical heterogeneity of the training data, we devise a runtime distribution reconstruction strategy, which reallocates the clients appropriately and utilizes mediators to rearrange the clients' local training. In addition, we design a compression-correction mechanism incorporated into H-FL to reduce the communication overhead without sacrificing model performance. To further provide privacy guarantees, we introduce differential privacy during local training, injecting a moderate amount of noise into only part of the complete model. Experimental results show that our H-FL framework achieves state-of-the-art performance on different datasets for real-world image recognition tasks.