Federated learning (FL) enables collaborative model training without centralizing data. However, the traditional FL framework is cloud-based and suffers from high communication latency. In contrast, the edge-based FL framework, which relies on an edge server co-located with an access point for model aggregation, has low communication latency but suffers from degraded model accuracy due to the limited coverage of the edge server. In light of high-accuracy but high-latency cloud-based FL and low-latency but low-accuracy edge-based FL, this paper proposes a new FL framework based on cooperative mobile edge networking, called cooperative federated edge learning (CFEL), to enable both high-accuracy and low-latency distributed intelligence at mobile edge networks. Considering the unique two-tier network architecture of CFEL, a novel federated optimization method dubbed cooperative edge-based federated averaging (CE-FedAvg) is further developed, in which each edge server both coordinates collaborative model training among the devices within its own coverage and cooperates with other edge servers to learn a shared global model through decentralized consensus. Experimental results on benchmark datasets show that CFEL substantially speeds up convergence and reduces the training time needed to reach a target model accuracy compared with prior FL frameworks.
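To make the two-tier structure of CE-FedAvg concrete, the following is a minimal NumPy sketch of the general idea: devices perform local updates, each edge server averages the models of the devices in its coverage, and the edge servers then run a decentralized consensus (gossip) step with one another. The linear-regression workload, the ring-style mixing matrix W, and helpers such as local_sgd are illustrative assumptions, not the paper's exact algorithm or hyperparameters.

import numpy as np

rng = np.random.default_rng(0)

D = 10                      # model dimension
NUM_EDGES = 3               # edge servers
DEVICES_PER_EDGE = 4        # devices covered by each edge server
LOCAL_STEPS = 5             # local updates per device per round
ROUNDS = 20
LR = 0.1

# Synthetic per-device least-squares data (hypothetical workload).
true_w = rng.normal(size=D)
device_data = []
for _ in range(NUM_EDGES):
    cluster = []
    for _ in range(DEVICES_PER_EDGE):
        X = rng.normal(size=(20, D))
        y = X @ true_w + 0.1 * rng.normal(size=20)
        cluster.append((X, y))
    device_data.append(cluster)

def local_sgd(w, X, y, steps, lr):
    """Device-side update: a few gradient steps on the local squared loss."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / X.shape[0]
        w -= lr * grad
    return w

# Doubly stochastic mixing matrix over the edge-server network (assumed topology).
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

edge_models = [np.zeros(D) for _ in range(NUM_EDGES)]

for r in range(ROUNDS):
    # Tier 1: each edge server averages local updates from its own devices (intra-cluster FedAvg).
    for e in range(NUM_EDGES):
        updates = [local_sgd(edge_models[e], X, y, LOCAL_STEPS, LR)
                   for (X, y) in device_data[e]]
        edge_models[e] = np.mean(updates, axis=0)
    # Tier 2: edge servers perform one decentralized consensus (gossip) step.
    edge_models = [sum(W[e, j] * edge_models[j] for j in range(NUM_EDGES))
                   for e in range(NUM_EDGES)]

print("max disagreement across edge models:",
      max(np.linalg.norm(edge_models[e] - edge_models[0]) for e in range(NUM_EDGES)))
print("distance to true model:", np.linalg.norm(edge_models[0] - true_w))

Under this sketch, the consensus step drives the edge models toward a common global model without routing every update through a central cloud, which is the property CFEL exploits to combine edge-level latency with cloud-level coverage.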