Client-wise heterogeneity is one of the major issues that hinder effective training in federated learning (FL). Since the data distribution on each client may vary dramatically, the client selection strategy can largely influence the convergence rate of the FL process. Active client selection strategies have been widely adopted in recent studies; however, they neglect the loss correlations between clients and achieve only marginal improvement over the uniform selection strategy. In this work, we propose FedGP, a federated learning framework built on a correlation-based client selection strategy, to boost the convergence rate of FL. Specifically, we first model the loss correlations between clients with a Gaussian process (GP). To make GP training practical in the communication-bounded FL process, we develop a GP training method that reduces the communication cost by exploiting covariance stationarity. Finally, based on the learned correlations, we derive a client selection strategy that enlarges the expected reduction of the global loss in each round. Our experimental results show that, compared to the latest active client selection strategy, FedGP improves the convergence rate by $1.3\sim2.0\times$ on FMNIST and $1.2\sim1.5\times$ on CIFAR-10.
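To make the correlation-based selection idea concrete, the following is a minimal sketch, not the paper's implementation: it assumes an RBF kernel over hypothetical per-client feature vectors (e.g., loss histories) and uses posterior-variance reduction under the GP as an illustrative stand-in for the paper's expected global-loss-reduction criterion. All names (`rbf_kernel`, `greedy_select`, `feats`) are invented for illustration.

```python
import numpy as np

def rbf_kernel(X, Y=None, length_scale=1.0):
    """Squared-exponential kernel between client feature vectors."""
    Y = X if Y is None else Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def greedy_select(K, m, noise=1e-3):
    """Greedily pick m clients whose observed losses most shrink the
    total GP posterior variance over all clients' losses -- a proxy
    for selecting the clients most informative about the global loss."""
    n = K.shape[0]
    selected = []
    for _ in range(m):
        best, best_gain = None, -np.inf
        for c in range(n):
            if c in selected:
                continue
            S = selected + [c]
            K_SS = K[np.ix_(S, S)] + noise * np.eye(len(S))
            K_aS = K[:, S]
            # GP posterior covariance of all client losses given
            # observations at the candidate subset S.
            post = K - K_aS @ np.linalg.solve(K_SS, K_aS.T)
            gain = np.trace(K) - np.trace(post)
            if gain > best_gain:
                best, best_gain = c, gain
        selected.append(best)
    return selected

# Toy usage: 20 clients with hypothetical 5-dim loss-history features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 5))
K = rbf_kernel(feats)
print(greedy_select(K, m=5))
```

In the actual framework, the GP covariance would be estimated from client losses observed during FL rounds (with the communication-saving training method), rather than from synthetic features as in this toy example.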