Personalized Federated Learning (PFL) is a new Federated Learning (FL) paradigm that specifically tackles the heterogeneity issues introduced by diverse mobile user equipments (UEs) in mobile edge computing (MEC) networks. However, due to the ever-increasing number of UEs and the administrative complexity this growth brings, it is desirable to move PFL from its conventional two-layer framework to a multi-layer one. In this paper, we propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks. The UEs in HPFL are divided into multiple clusters; the UEs in each cluster forward their local updates to their edge server (ES) synchronously for edge model aggregation, while the ESs forward their edge models to the cloud server semi-asynchronously for global model aggregation. This training scheme leads to a tradeoff between the training loss in each round and the round latency. HPFL therefore combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation and the ES scheduling policy within the hierarchical learning framework. Extensive experiments verify that HPFL not only guarantees convergence in hierarchical aggregation frameworks but also offers advantages in round training loss minimization and round latency minimization.
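To make the two aggregation levels concrete, below is a minimal Python sketch of the hierarchical scheme described above: a synchronous weighted average at each ES, and a semi-asynchronous cloud step that merges only the edge models that arrive by a deadline, discounted by staleness. The function names (edge_aggregate, cloud_aggregate_semi_async), the mixing rate alpha, and the 1/(1+staleness) discount are illustrative assumptions, not the paper's method; in particular, HPFL's joint bandwidth-allocation and ES-scheduling optimization is not modeled here.

```python
import numpy as np

def edge_aggregate(client_updates, client_weights):
    """Synchronous edge aggregation: weighted average of the local
    updates uploaded by all UEs in one cluster (hypothetical helper)."""
    total = sum(client_weights)
    return sum(w * u for w, u in zip(client_weights, client_updates)) / total

def cloud_aggregate_semi_async(global_model, edge_models, staleness, alpha=0.5):
    """Semi-asynchronous global aggregation (illustrative assumption):
    each reporting ES is mixed into the global model with a weight that
    decays with its staleness, so late edge models contribute less."""
    for model, tau in zip(edge_models, staleness):
        mix = alpha / (1.0 + tau)  # assumed staleness discount
        global_model = (1.0 - mix) * global_model + mix * model
    return global_model

# Toy example: two clusters of three UEs each, model = 4-dim parameter vector.
rng = np.random.default_rng(0)
global_model = np.zeros(4)
edge_models = [
    edge_aggregate([rng.normal(size=4) for _ in range(3)], [1.0, 2.0, 1.0])
    for _ in range(2)
]
# The second ES reports late (staleness 2) and is therefore down-weighted.
global_model = cloud_aggregate_semi_async(global_model, edge_models, staleness=[0, 2])
print(global_model)
```

In this sketch the staleness discount is what realizes the loss-versus-latency tradeoff: waiting for more ESs lowers the round loss but raises the round latency, while proceeding early keeps latency low at the cost of down-weighted, stale edge models.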