Federated learning (FL) enables resource-constrained edge nodes to collaboratively learn a global model under the orchestration of a central server while keeping privacy-sensitive data local. Non-independent-and-identically-distributed (non-IID) data across participating nodes slows model training and requires additional communication rounds for FL to converge. In this paper, we propose the Federated Adaptive Weighting (FedAdp) algorithm, which aims to accelerate model convergence in the presence of nodes with non-IID datasets. Through theoretical and empirical analysis, we observe an implicit connection between a node's contribution to global model aggregation and the data distribution on that node. We then propose to adaptively assign different weights to nodes for updating the global model in each training round, based on their contributions. The contribution of a participating node is first measured by the angle between its local gradient vector and the global gradient vector, and its weight is then quantified by a designed non-linear mapping function. This simple yet effective strategy dynamically reinforces positive (and suppresses negative) node contributions, drastically reducing the number of communication rounds. Its superiority over the commonly adopted Federated Averaging (FedAvg) algorithm is verified both theoretically and experimentally. Through extensive experiments in PyTorch and PySyft, we show that FL training with FedAdp reduces the number of communication rounds by up to 54.1% on the MNIST dataset and up to 45.4% on the FashionMNIST dataset, compared with FedAvg.
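The angle-based weighting idea described above can be sketched as follows. This is a minimal illustration only: the exponential decay mapping and the `alpha` parameter are assumptions standing in for the paper's designed non-linear mapping function, and gradients are flattened into plain vectors.

```python
import numpy as np

def adaptive_weights(local_grads, global_grad, alpha=5.0):
    """Sketch of angle-based adaptive weighting for aggregation.

    local_grads: list of 1-D gradient vectors, one per node.
    global_grad: 1-D global gradient vector.
    alpha: illustrative sharpness parameter (assumption, not from the paper).
    Returns normalized aggregation weights, one per node.
    """
    angles = []
    for g in local_grads:
        cos = np.dot(g, global_grad) / (
            np.linalg.norm(g) * np.linalg.norm(global_grad)
        )
        # Clip to guard against floating-point drift outside [-1, 1].
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    angles = np.array(angles)
    # Smaller angle (local gradient aligned with the global gradient)
    # -> larger contribution; the exponential form is illustrative.
    scores = np.exp(-alpha * angles)
    return scores / scores.sum()
```

A node whose local gradient points in the same direction as the global gradient receives a large weight, while a node whose gradient is nearly orthogonal (typical under non-IID data) is suppressed.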