Communication costs in federated learning (FL) hinder system scalability and limit how much data can be reached across clients. FL, as originally proposed, adopts a hub-and-spoke network topology in which all clients communicate through a central server. Reducing communication overhead via techniques such as data compression has therefore been proposed to mitigate this bottleneck. Another challenge of federated learning is unbalanced data distribution: in a typical federated setting, the data on each client are not independent and identically distributed (non-IID). In this paper, we propose a new compression compensation scheme called Global Momentum Fusion (GMF), which reduces communication overhead between FL clients and the server while maintaining comparable model accuracy in the presence of non-IID data. GitHub repository: https://github.com/tony92151/global-momentum-fusion-fl
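To make the idea of a compression compensation scheme concrete, the following is a minimal sketch of a client that fuses a server-broadcast global momentum term into its local gradient before top-k sparsification, keeping the discarded residual for the next round (error feedback). This is an illustration under assumed names (`CompensatedClient`, `topk_sparsify`, `fusion_coef`), not the actual GMF implementation; the authors' algorithm is in the linked repository.

```python
import numpy as np

def topk_sparsify(grad, ratio=0.01):
    """Keep only the largest-magnitude entries of a flat gradient vector."""
    k = max(1, int(grad.size * ratio))
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

class CompensatedClient:
    """Hypothetical client that combines a global momentum term with its
    local gradient before compression (names and update rule are assumptions,
    not the GMF algorithm itself)."""

    def __init__(self, dim, fusion_coef=0.9, ratio=0.01):
        self.residual = np.zeros(dim)    # error feedback from previous rounds
        self.fusion_coef = fusion_coef   # weight on the global momentum term (assumed)
        self.ratio = ratio               # top-k compression ratio (assumed)

    def compress(self, local_grad, global_momentum):
        # Fuse the server-broadcast global momentum with the local gradient,
        # then add the residual dropped by the previous round's compression.
        fused = local_grad + self.fusion_coef * global_momentum + self.residual
        sparse = topk_sparsify(fused, self.ratio)
        self.residual = fused - sparse   # remember what was dropped
        return sparse                    # only this sparse update is uploaded
```

The intent of such a scheme is that the fused momentum carries global descent information into each compressed update, so aggressive sparsification costs less accuracy on non-IID client data.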