Federated learning often suffers from unstable and slow convergence due to the heterogeneous characteristics of participating clients. This tendency is aggravated when the client participation ratio is low, since the information collected from the clients at each round is prone to be more inconsistent. To tackle this challenge, we propose a novel federated learning framework that improves the stability of the server-side aggregation step by sending the clients an accelerated model, estimated with the global gradient, to guide their local gradient updates. Our algorithm naturally aggregates and conveys the global update information to participants with no additional communication cost and does not require clients to store past models. We also regularize the local updates to further reduce their bias and improve their stability. We perform comprehensive empirical studies on real data under various settings and demonstrate the remarkable performance of the proposed method in terms of accuracy and communication efficiency compared to state-of-the-art methods, especially with low client participation rates. Our code is available at https://github.com/ninigapa0/FedAGM
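The following is a minimal sketch, not the authors' reference implementation, of the round structure the abstract describes: the server extrapolates the global model along an accumulated global update (the "accelerated" model), broadcasts it, clients run proximally regularized local SGD from that model, and the server aggregates the resulting deltas without storing any per-client state. The hyperparameter names (`momentum`, `lambda_prox`, `local_steps`) and the exact extrapolation and aggregation rules are illustrative assumptions; `global_grad` is assumed to be a list of zero-initialized buffers matching the model parameters.

```python
import copy
import torch
import torch.nn.functional as F

def server_round(global_model, global_grad, client_loaders,
                 lr=0.01, momentum=0.9, lambda_prox=0.01, local_steps=5):
    """One communication round (illustrative sketch, not the paper's exact rules)."""
    # Server builds an "accelerated" model by extrapolating the global model
    # along the accumulated global update, then broadcasts only this model
    # (no extra communication beyond a single model per round).
    accelerated = copy.deepcopy(global_model)
    with torch.no_grad():
        for p, g in zip(accelerated.parameters(), global_grad):
            p.add_(g, alpha=-momentum)

    client_deltas = []
    for loader in client_loaders:  # the (possibly few) participating clients
        local = copy.deepcopy(accelerated)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_steps):
            for x, y in loader:
                opt.zero_grad()
                loss = F.cross_entropy(local(x), y)
                # Proximal regularizer keeps the local update close to the
                # broadcast accelerated model, reducing client drift/bias.
                prox = sum((lp - ap.detach()).pow(2).sum()
                           for lp, ap in zip(local.parameters(),
                                             accelerated.parameters()))
                (loss + 0.5 * lambda_prox * prox).backward()
                opt.step()
        # Pseudo-gradient of this client: start point minus end point.
        client_deltas.append([ap.detach() - lp.detach()
                              for lp, ap in zip(local.parameters(),
                                                accelerated.parameters())])

    # Server aggregates the client pseudo-gradients into a new global update,
    # accumulates it with momentum, and applies it to the global model.
    with torch.no_grad():
        avg_delta = [torch.stack(ds).mean(dim=0) for ds in zip(*client_deltas)]
        for p, d, g in zip(global_model.parameters(), avg_delta, global_grad):
            g.mul_(momentum).add_(d)     # momentum-style accumulation
            p.add_(g, alpha=-1.0)        # descend along the accumulated update
    return global_model, global_grad
```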