Federated learning (FL) enables collaborative model training without centralizing data. However, the traditional FL framework is cloud-based and suffers from high communication latency. In contrast, the edge-based FL framework, which relies on an edge server co-located with a mobile base station for model aggregation, has low communication latency but suffers from degraded model accuracy due to the limited coverage of the edge server. In light of the high-accuracy but high-latency cloud-based FL and the low-latency but low-accuracy edge-based FL, this paper proposes a new FL framework based on cooperative mobile edge networking, called cooperative federated edge learning (CFEL), to enable both high-accuracy and low-latency distributed intelligence at mobile edge networks. Considering the unique two-tier network architecture of CFEL, a novel federated optimization method dubbed cooperative edge-based federated averaging (CE-FedAvg) is further developed, wherein each edge server both coordinates collaborative model training among the devices within its own coverage and cooperates with other edge servers to learn a shared global model through decentralized consensus. Experimental results on benchmark datasets show that CFEL can significantly reduce the training time needed to reach a target model accuracy compared with prior FL frameworks.
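The following is a minimal, hypothetical sketch of the two-tier aggregation structure described above: each edge server first averages the models of the devices it covers (FedAvg-style aggregation), then mixes its aggregate with those of neighboring edge servers through a consensus step. The function names, the mixing matrix, and the toy setup are illustrative assumptions, not the paper's exact CE-FedAvg procedure.

```python
# Hypothetical two-tier aggregation sketch (intra-edge FedAvg + inter-edge consensus).
import numpy as np

def intra_edge_fedavg(device_models, device_weights):
    """Edge server averages the models of devices within its coverage,
    weighted by local dataset sizes (standard FedAvg-style aggregation)."""
    w = np.asarray(device_weights, dtype=float)
    w = w / w.sum()
    return sum(wi * mi for wi, mi in zip(w, device_models))

def inter_edge_consensus(edge_models, mixing_matrix):
    """Each edge server mixes its model with other edge servers' models
    via a doubly stochastic mixing matrix (decentralized consensus)."""
    return [
        sum(mixing_matrix[i, j] * edge_models[j] for j in range(len(edge_models)))
        for i in range(len(edge_models))
    ]

# Toy example: 2 edge servers, each covering 2 devices with 3-dimensional "models".
rng = np.random.default_rng(0)
edge_models = []
for _ in range(2):
    device_models = [rng.normal(size=3) for _ in range(2)]
    edge_models.append(intra_edge_fedavg(device_models, device_weights=[100, 50]))

# Assumed fully connected topology over the 2 edge servers: uniform mixing matrix.
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])
edge_models = inter_edge_consensus(edge_models, W)
print(edge_models)
```

Repeating these two steps over communication rounds would drive the edge servers toward a shared global model without routing every update through a central cloud, which is the source of the latency savings claimed above.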