Federated Learning (FL) is a distributed learning paradigm that enables a large number of resource-limited nodes to collaboratively train a model without sharing their data. Non-independent-and-identically-distributed (non-i.i.d.) data samples induce a discrepancy between the global and local objectives, which slows the convergence of the FL model. In this paper, we propose an Optimal Aggregation algorithm that finds the optimal subset of local updates from the participating nodes in each global round, by identifying and excluding adverse local updates through checking the relationship between each local gradient and the global gradient. Building on this, we propose a Probabilistic Node Selection framework (FedPNS) that dynamically adjusts the probability of each node being selected based on the output of Optimal Aggregation, so that nodes that propel faster model convergence are preferentially selected. We illustrate the unbiasedness of the proposed FedPNS design and theoretically analyze the convergence rate improvement of FedPNS over the commonly adopted Federated Averaging (FedAvg) algorithm. Experimental results demonstrate the effectiveness of FedPNS in accelerating the FL convergence rate, as compared to FedAvg with random node selection.
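To make the two mechanisms concrete, the following is a minimal sketch, not the paper's implementation: it assumes NumPy, flattens each node's gradient into a vector, and uses the sign of the inner product with the aggregate gradient as the alignment test. The function names (`optimal_aggregate`, `update_selection_probs`) and the `decay` hyperparameter are illustrative assumptions.

```python
import numpy as np

def optimal_aggregate(local_grads, weights):
    """Sketch of the Optimal Aggregation idea: iteratively drop local
    updates whose direction conflicts with the aggregated (global)
    gradient, then re-aggregate over the remaining subset.
    local_grads: list of 1-D np.ndarray gradients, one per node.
    weights: per-node aggregation weights (e.g., data fractions)."""
    keep = list(range(len(local_grads)))
    while True:
        global_grad = sum(weights[i] * local_grads[i] for i in keep)
        # An update is "adverse" if it points against the global gradient
        # (negative inner product) -- an assumed concrete test here.
        adverse = [i for i in keep if np.dot(local_grads[i], global_grad) < 0]
        if not adverse:
            return global_grad, keep
        # Exclude the most misaligned update and re-aggregate.
        worst = min(adverse, key=lambda i: np.dot(local_grads[i], global_grad))
        keep.remove(worst)

def update_selection_probs(probs, excluded, decay=0.9):
    """Sketch of the FedPNS idea: lower the selection probability of
    nodes whose updates were excluded, then renormalize, so that
    convergence-propelling nodes are preferentially sampled next round."""
    p = np.asarray(probs, dtype=float).copy()
    p[excluded] *= decay  # decay=0.9 is an assumed hyperparameter
    return p / p.sum()
```

In each global round, the server would call `optimal_aggregate` on the collected updates, then feed the excluded indices into `update_selection_probs` to bias the next round's sampling; the paper's actual exclusion test and probability adjustment may differ in detail.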