As a promising privacy-preserving machine learning method, Federated Learning (FL) enables global model training across clients without compromising their confidential local data. However, existing FL methods suffer from low inference performance on unevenly distributed (non-IID) data, since most of them rely on Federated Averaging (FedAvg)-based aggregation. By averaging model parameters in a coarse-grained manner, FedAvg obscures the individual characteristics of local models, which strongly limits the inference capability of FL. Worse still, in each round of FL training, FedAvg dispatches the same initial model to all clients, which can easily trap the search for an optimal global model in a local region. To address these issues, this paper proposes a novel and effective FL paradigm named FedMR (Federated Model Recombination). Unlike conventional FedAvg-based methods, the cloud server in FedMR shuffles each layer of the collected local models and recombines the layers into new models, which are then dispatched to clients for local training. Owing to this fine-grained model recombination together with local training in each FL round, FedMR can quickly converge toward a globally optimal model for all clients. Comprehensive experimental results demonstrate that, compared with state-of-the-art FL methods, FedMR significantly improves inference accuracy without incurring extra communication overhead.
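The following is a minimal sketch of the layer-wise recombination step described above, assuming each local model is represented as a state_dict-like mapping from layer names to parameter tensors; the function and variable names (e.g., recombine_models) are illustrative and not taken from the paper's implementation.

```python
import random

def recombine_models(local_models, seed=None):
    """Shuffle each layer independently across the collected local
    models and reassemble them into the same number of new models."""
    rng = random.Random(seed)
    num_models = len(local_models)
    layer_names = list(local_models[0].keys())

    recombined = [{} for _ in range(num_models)]
    for name in layer_names:
        # Independently permute which source model supplies this layer,
        # so every recombined model receives exactly one copy of it.
        order = list(range(num_models))
        rng.shuffle(order)
        for dst, src in enumerate(order):
            recombined[dst][name] = local_models[src][name]
    return recombined

# Toy usage: three "models", each with two named layers.
models = [{"conv1": f"conv1_from_client{i}", "fc": f"fc_from_client{i}"}
          for i in range(3)]
new_models = recombine_models(models, seed=0)
```

In contrast to FedAvg, which would dispatch one averaged model to every client, each recombined model here would be sent to a different client for the next round of local training.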