Federated learning (FL) is a novel learning paradigm that addresses the privacy-leakage challenge of centralized learning. However, in FL, users with non-independent and identically distributed (non-IID) data can degrade the performance of the global model. Specifically, the global model suffers from weight divergence owing to non-IID data. To address this challenge, we propose FedDif, a novel diffusion strategy for machine learning (ML) models that maximizes FL performance under non-IID data. In FedDif, users pass local models to neighboring users over device-to-device (D2D) communications, so that each local model experiences different data distributions before parameter aggregation. Furthermore, we theoretically demonstrate that FedDif can mitigate the weight divergence problem. On this theoretical basis, we propose a communication-efficient diffusion strategy that balances the trade-off between learning performance and communication cost using auction theory. The performance evaluation shows that FedDif improves the test accuracy of the global model by 10.37% compared with baseline FL under non-IID settings. Moreover, FedDif reduces the number of consumed sub-frames by a factor of 1.28 to 2.85 compared with the latest methods, except for the model compression scheme, and reduces the number of transmitted models by a factor of 1.43 to 2.67 relative to the latest methods.
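To make the diffusion idea concrete, the following is a minimal sketch, not the authors' implementation: it uses a simple linear-regression "model" (a weight vector), synthetic non-IID user data, and random neighbor ordering in place of the auction-based scheduling described above. Names such as `diffuse_round` and all constants are hypothetical choices for illustration.

```python
# Minimal sketch of the FedDif diffusion idea (assumptions noted above; not
# the paper's implementation). Each local model hops across users with
# different data distributions before the server aggregates parameters.
import numpy as np

rng = np.random.default_rng(0)
NUM_USERS, DIM, LOCAL_STEPS, LR = 5, 3, 10, 0.02

# Synthetic non-IID data: each user draws features around a different mean,
# but all users share the same ground-truth weights.
true_w = rng.normal(size=DIM)
user_data = []
for u in range(NUM_USERS):
    X = rng.normal(loc=0.5 * u, size=(40, DIM))  # shifted distribution per user
    y = X @ true_w + 0.1 * rng.normal(size=40)
    user_data.append((X, y))

def local_train(w, X, y):
    """Run a few SGD steps of linear regression on one user's local data."""
    for _ in range(LOCAL_STEPS):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - LR * grad
    return w

def diffuse_round(models, order):
    """One diffusion iteration: model u is handed to user order[u] and
    trained on that user's local distribution (stand-in for a D2D hop)."""
    return [local_train(models[u], *user_data[order[u]]) for u in range(NUM_USERS)]

# One communication round: copy the global model to every user, diffuse the
# copies over several hops, then aggregate (FedAvg-style averaging).
global_w = np.zeros(DIM)
models = [global_w.copy() for _ in range(NUM_USERS)]
for hop in range(3):                    # diffusion hops before aggregation
    order = rng.permutation(NUM_USERS)  # random matching; the paper uses auctions
    models = diffuse_round(models, order)
global_w = np.mean(models, axis=0)      # parameter aggregation at the server

print("distance to true weights:", np.linalg.norm(global_w - true_w))
```

The point the sketch illustrates is that each model copy visits several users with different local distributions before averaging, which is the mechanism FedDif uses to counteract weight divergence; the auction-based scheduling then decides how many such hops are worth their communication cost.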