In federated learning, geographically distributed clients collaboratively train a global model. Data heterogeneity among clients leads to inconsistent model updates, which significantly slows down model convergence. To alleviate this issue, many methods employ regularization terms that narrow the discrepancy between client-side local models and the server-side global model. However, these methods limit the ability to explore superior local models and ignore the valuable information contained in historical local models. Moreover, although a recent representation-based method considers the global and historical local models simultaneously, it incurs prohibitive computation cost. To accelerate convergence with low resource consumption, we propose a model regularization method named FedTrip, which restricts global-local divergence and decreases the correlation between the current and historical local models to mitigate the negative effects of data heterogeneity. FedTrip keeps the current local model close to the global model while pushing it away from historical local models, which helps maintain the consistency of local updates across clients and efficiently explore superior local models, with negligible additional computation cost from the attached regularization operations. Empirically, we demonstrate the superiority of FedTrip through extensive evaluations: to reach the target accuracy, FedTrip significantly reduces the total overhead of client-server communication and local computation compared with state-of-the-art baselines.
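To make the idea concrete, below is a minimal sketch (in PyTorch, with hypothetical function and parameter names such as `trip_regularizer`, `mu`, and `nu`) of a triplet-style regularizer of the kind the abstract describes: the current local model is pulled toward the global model while being pushed away from a historical local model. The exact formulation and weighting used by FedTrip may differ; this only illustrates the concept.

```python
import torch


def trip_regularizer(local_model, global_model, historical_model, mu=1.0, nu=1.0):
    """Hypothetical triplet-style regularizer.

    Pulls the current local model toward the global model (positive anchor)
    and pushes it away from a historical local model (negative anchor).
    This is an illustrative sketch, not FedTrip's exact formulation.
    """
    pull, push = 0.0, 0.0
    for w_local, w_global, w_hist in zip(
        local_model.parameters(),
        global_model.parameters(),
        historical_model.parameters(),
    ):
        # Stay close to the global model to keep local updates consistent.
        pull = pull + torch.sum((w_local - w_global.detach()) ** 2)
        # Move away from the historical local model to escape stale solutions.
        push = push + torch.sum((w_local - w_hist.detach()) ** 2)
    return mu * pull - nu * push
```

In use, such a term would simply be added to each client's local task loss (e.g., `loss = criterion(output, target) + trip_regularizer(...)`) during local training, so the extra cost is only a parameter-wise difference over the model weights.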