Federated Learning (FL) is a privacy-preserving machine learning paradigm that allows models to be trained directly at edge nodes without uploading raw data. One of the biggest challenges FL faces in practical applications is the heterogeneity of edge-node data, which slows convergence and degrades model performance. A representative line of solutions to this problem adds extra constraints to local training, as in FedProx, FedCurv, and FedCL. However, these algorithms still leave room for improvement. We propose using the aggregation of all models obtained in the past as the new constraint target, further improving the performance of such algorithms. Experiments under various settings demonstrate that our method significantly improves the convergence speed and performance of the model.
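To make the idea concrete, the following PyTorch sketch illustrates a FedProx-style local objective whose proximal anchor is a running average of all global models received so far, rather than only the latest one. This is our own illustration, not code from the paper: the function names, the running-mean anchor update, and the proximal coefficient `mu` are assumptions.

```python
import torch
import torch.nn as nn

def proximal_term(model: nn.Module, anchor: list) -> torch.Tensor:
    # Squared L2 distance between the local model and the constraint target.
    return sum(((p - a) ** 2).sum()
               for p, a in zip(model.parameters(), anchor))

def update_anchor(anchor: list, new_global: list, rounds_seen: int) -> list:
    # Hypothetical anchor update: running mean of all global models so far.
    return [(a * rounds_seen + g) / (rounds_seen + 1)
            for a, g in zip(anchor, new_global)]

# Toy local update on one client.
model = nn.Linear(10, 2)
anchor = [p.detach().clone() for p in model.parameters()]  # round-0 global model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
mu = 0.01  # proximal coefficient, as in FedProx (value is an assumption)

x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
opt.zero_grad()
task_loss = nn.functional.cross_entropy(model(x), y)
# Local objective: task loss + proximal pull toward the historical average.
loss = task_loss + 0.5 * mu * proximal_term(model, anchor)
loss.backward()
opt.step()
```

Under this sketch, vanilla FedProx is recovered by replacing `update_anchor` with one that simply copies the latest global model.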