Federated learning is a widely used distributed deep learning framework that protects the privacy of each client by exchanging model parameters rather than raw data. However, federated learning suffers from high communication costs, because a large number of model parameters must be transmitted many times during training, which makes the approach inefficient, especially when network bandwidth is limited. This article proposes RingFed, a novel framework that reduces the communication overhead incurred during the training process of federated learning. Rather than transmitting parameters between the central server and each client, as in the original federated learning framework, RingFed passes the updated parameters from client to client in turn and transmits only the final result to the central server, thereby reducing the communication overhead substantially. After several local updates, each client first sends its parameters to a neighboring client for preaggregation, rather than directly to the central server. Experiments on two public datasets show that RingFed converges quickly and achieves high model accuracy at low communication cost.
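To make the communication pattern concrete, the following is a minimal sketch of one ring-preaggregation round, assuming plain FedAvg-style parameter averaging. The helper `local_update`, the equal client weighting, and the single-vector parameter representation are illustrative assumptions, not the paper's exact update rule.

```python
# Minimal sketch of ring-style preaggregation (hypothetical, FedAvg-style).
import numpy as np

def local_update(params: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Placeholder for several local training steps on a client's private data."""
    return params - 0.01 * rng.normal(size=params.shape)  # hypothetical local step

def ringfed_round(global_params: np.ndarray, num_clients: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    # Each client trains locally, starting from the current global model.
    client_params = [local_update(global_params.copy(), rng) for _ in range(num_clients)]

    # Ring preaggregation: instead of every client uploading to the server,
    # a running sum is handed from client to client around the ring.
    running_sum = np.zeros_like(global_params)
    for params in client_params:   # client i receives the sum, adds its own
        running_sum += params      # parameters, and forwards it to client i+1

    # Only the last client in the ring contacts the central server, which
    # completes the average: one upload per round instead of num_clients.
    return running_sum / num_clients

new_global = ringfed_round(np.zeros(10), num_clients=5)
```

Under these assumptions, per-round uplink traffic to the server drops from one model per client to a single preaggregated model, at the cost of sequential client-to-client transfers around the ring.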