Data generated at the network edge can be processed locally by leveraging the paradigm of edge computing (EC). Aided by EC, decentralized federated learning (DFL), which overcomes the single-point-of-failure problem of parameter server (PS)-based federated learning, is becoming a practical and popular approach for machine learning over distributed data. However, DFL faces two critical challenges, \ie, system heterogeneity and statistical heterogeneity introduced by edge devices. To ensure fast convergence in the presence of slow edge devices, we present an efficient DFL method, termed FedHP, which integrates adaptive control of both local updating frequency and network topology to better support heterogeneous participants. We establish a theoretical relationship between local updating frequency and network topology with respect to model training performance, and derive a convergence upper bound. Building on this bound, we propose an optimization algorithm that adaptively determines local updating frequencies and constructs the network topology, so as to speed up convergence and improve model accuracy. Evaluation results show that the proposed FedHP can reduce the completion time by about 51% and improve model accuracy by at least 5% in heterogeneous scenarios, compared with the baselines.