Federated learning (FL) is a paradigm in which many clients collaboratively train a model under the coordination of a central server while keeping their training data stored locally. However, heterogeneous data distributions across clients remain a challenge for mainstream FL algorithms, causing slow convergence, degraded overall performance, and unfairness of performance across clients. To address these problems, we propose a reinforcement learning framework, called PG-FFL, that automatically learns a policy to assign aggregation weights to clients. Additionally, we propose to use the Gini coefficient as a measure of fairness for FL. More importantly, we use the Gini coefficient and the clients' validation accuracy in each communication round to construct the reward function for reinforcement learning. PG-FFL is also compatible with many existing FL algorithms. We conduct extensive experiments on diverse datasets to verify the effectiveness of our framework, and the results show that it outperforms baseline methods in terms of overall performance, fairness, and convergence speed.
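To make the two quantities named above concrete, the following is a minimal sketch (not the authors' code) of the Gini coefficient computed over per-client validation accuracies, together with one plausible per-round reward that trades off mean accuracy against inequality. The exact combination used in PG-FFL is not specified here; the penalty weight `lam` and the additive form of `round_reward` are illustrative assumptions.

```python
import numpy as np

def gini(values: np.ndarray) -> float:
    """Gini coefficient: mean absolute pairwise difference / (2 * mean)."""
    values = np.asarray(values, dtype=float)
    n = values.size
    diffs = np.abs(values[:, None] - values[None, :]).sum()
    return diffs / (2.0 * n * n * values.mean())

def round_reward(client_accs: np.ndarray, lam: float = 1.0) -> float:
    """Hypothetical reward: reward high mean accuracy and low Gini.

    `lam` is an assumed trade-off weight, not a value from the paper.
    """
    return float(client_accs.mean() - lam * gini(client_accs))

# Example: a more uniform accuracy profile yields a higher reward,
# so the policy is pushed toward both accurate and fair aggregation.
print(round_reward(np.array([0.80, 0.81, 0.79])))  # fair profile
print(round_reward(np.array([0.95, 0.60, 0.85])))  # unfair profile
```

A Gini coefficient of 0 corresponds to identical accuracy across clients (perfect fairness), and values near 1 indicate extreme inequality, which is why a reward of this shape steers the aggregation policy toward uniform client performance.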