The growing interest in intelligent services and privacy protection for mobile devices has given rise to the widespread application of federated learning in Multi-access Edge Computing (MEC). Diverse user behaviors call for personalized services with heterogeneous Machine Learning (ML) models on different devices. Federated Multi-task Learning (FMTL) has been proposed to train related but personalized ML models for different devices, whereas previous works suffer from excessive communication overhead during training and neglect the model heterogeneity among devices in MEC. Introducing knowledge distillation into FMTL can simultaneously enable efficient communication and model heterogeneity among clients, but existing methods rely on a public dataset, which is impractical in reality. To tackle this dilemma, Federated Multi-task Distillation for Multi-access Edge Computing (FedICT) is proposed. FedICT decouples local and global knowledge during the bi-directional distillation processes between clients and the server, aiming to enable multi-task clients while alleviating the client drift derived from divergent optimization directions of client-side local models. Specifically, FedICT includes Federated Prior Knowledge Distillation (FPKD) and Local Knowledge Adjustment (LKA). FPKD is proposed to reinforce the clients' fitting of local data by introducing prior knowledge of local data distributions. Moreover, LKA is proposed to correct the distillation loss of the server, making the transferred local knowledge better match the generalized representation. Experiments on three datasets show that FedICT significantly outperforms all compared benchmarks under various data heterogeneity and model architecture settings, achieving improved accuracy with less than 1.2% of the training communication overhead of FedAvg and no more than 75% of the training communication rounds of FedGKT.
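For intuition only, the following minimal PyTorch sketch illustrates how a prior-weighted, client-side distillation objective in the spirit of FPKD might be written. The function name `fpkd_loss`, the use of local label frequencies as the "prior knowledge", and all hyper-parameters are illustrative assumptions rather than the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def fpkd_loss(student_logits, teacher_logits, labels,
              local_label_freq, temperature=3.0, alpha=0.5):
    """Hypothetical sketch of a prior-weighted distillation loss.

    local_label_freq: 1-D tensor of length num_classes holding the empirical
    class distribution of the client's local data, used here as the assumed
    'prior knowledge' that re-weights the distillation signal so the client
    keeps fitting its own data distribution.
    """
    # Standard supervised loss on the client's local labels.
    ce = F.cross_entropy(student_logits, labels)

    # Soft-label distillation loss against the global (server-side) teacher logits.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="none",
    ).sum(dim=1) * (temperature ** 2)

    # Weight each sample's distillation term by how frequent its class is
    # locally, so transferred global knowledge does not override local fitting.
    per_sample_prior = local_label_freq[labels]      # shape: (batch,)
    kd = (kd * per_sample_prior).mean()

    return alpha * ce + (1 - alpha) * kd
```

A symmetric correction on the server side (LKA in the abstract) would analogously adjust the server's distillation loss so that the uploaded local knowledge better matches the generalized representation; the sketch above only covers the client-side direction.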