Federated Learning (FL) has recently emerged as a popular framework that allows resource-constrained, distributed clients to cooperatively learn a global model under the orchestration of a central server while keeping privacy-sensitive data local. However, owing to differences in hardware and data distributions across heterogeneous clients, the local models' parameters deviate from one another, which slows convergence and reduces the accuracy of the global model. Current FL algorithms pervasively use static client learning strategies and cannot adapt to the dynamic training parameters of different clients. In this paper, by accounting for the deviation between different local model parameters, we propose an entropy-based adaptive learning rate scheme for each client that alleviates the deviation between heterogeneous clients and achieves fast convergence of the global model. Designing the optimal dynamic learning rate for each client is difficult because the local information of other clients is unknown, especially during local training epochs when clients do not communicate with the central server. To enable a decentralized learning-rate design for each client, we first introduce mean-field schemes to estimate the terms that depend on other clients' local model parameters. The decentralized adaptive learning rate for each client is then obtained in closed form by constructing the Hamilton equation. Moreover, we prove that fixed-point solutions exist for the mean-field estimators, and we propose an algorithm to compute them. Finally, extensive experiments on real datasets show that, compared with other recent FL algorithms, our algorithm effectively eliminates the deviation between local model parameters.
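To make the two-step structure of the scheme concrete, the following is a minimal Python sketch of the fixed-point iteration for the mean-field estimators and a drift-penalized per-client learning rate. All names (fixed_point_mean_field, adaptive_lr, simulate_local_training, kappa) are hypothetical, and the learning-rate formula below is an illustrative stand-in for the paper's closed-form rate derived from the Hamilton equation, not the actual derivation; the toy quadratic losses stand in for heterogeneous client data.

    import numpy as np

    def fixed_point_mean_field(simulate_local_training, T, d,
                               tol=1e-6, max_iter=100):
        """Iterate the mean-field estimators phi (per-epoch averages of
        the clients' local model parameters, shape (T, d)) until they
        reproduce themselves, i.e. reach a fixed point."""
        phi = np.zeros((T, d))  # initial guess for the estimators
        for _ in range(max_iter):
            phi_new = simulate_local_training(phi)
            if np.max(np.abs(phi_new - phi)) < tol:
                return phi_new
            phi = phi_new
        return phi

    def adaptive_lr(theta_i, phi_t, base_lr=0.01, kappa=1.0):
        # Hypothetical rule: shrink the step as client i's parameters
        # drift away from the mean-field average phi_t, so that clients
        # far from the population estimate are pulled back more gently.
        drift = np.linalg.norm(theta_i - phi_t)
        return base_lr / (1.0 + kappa * drift)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        T, d, n_clients = 5, 3, 10
        data_means = rng.normal(size=(n_clients, d))  # toy heterogeneous data

        def simulate_local_training(phi):
            # Each client runs T local epochs on a quadratic loss
            # 0.5 * ||theta - mu_i||^2 centred at its own data mean,
            # using the drift-aware rate; return per-epoch averages.
            thetas = np.zeros((n_clients, d))
            averages = np.zeros((T, d))
            for t in range(T):
                for i in range(n_clients):
                    lr = adaptive_lr(thetas[i], phi[t])
                    grad = thetas[i] - data_means[i]
                    thetas[i] = thetas[i] - lr * grad
                averages[t] = thetas.mean(axis=0)
            return averages

        phi_star = fixed_point_mean_field(simulate_local_training, T, d)
        print("fixed-point mean-field estimators:\n", phi_star)

Under these assumptions the iteration matches the abstract's logic: clients never exchange parameters during local epochs; they only consult the shared estimators phi, which the fixed-point loop makes self-consistent with the training they induce.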