Federated Learning (FL) is an exciting new paradigm that enables training a global model from data generated locally at the client nodes, without moving client data to a centralized server. The performance of FL in a multi-access edge computing (MEC) network suffers from slow convergence due to heterogeneity and stochastic fluctuations in compute power and communication link quality across clients. A recent work, Coded Federated Learning (CFL), proposes to mitigate stragglers and speed up training for linear regression tasks by assigning redundant computations to the MEC server. Coding redundancy in CFL is computed by exploiting the statistical properties of compute and communication delays. We develop CodedFedL, which addresses the difficult task of extending CFL to distributed non-linear regression and classification problems with multi-output labels. The key innovation of our work is to exploit distributed kernel embedding via random Fourier features, which transforms the training task into distributed linear regression. We provide an analytical solution for load allocation, and demonstrate significant performance gains for CodedFedL through experiments on benchmark datasets using practical network parameters.
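The kernel embedding step above can be illustrated with a minimal random Fourier feature sketch (Rahimi–Recht style) that approximates a Gaussian/RBF kernel; the function name and the parameters `D` (feature dimension) and `gamma` (kernel width) are illustrative choices, not values from the paper.

```python
import numpy as np

def rff_features(X, D=2000, gamma=1.0, seed=0):
    """Map inputs X (n x d) to random Fourier features z(X) so that
    z(x) @ z(y) ~= exp(-gamma * ||x - y||^2), the RBF kernel.
    D and gamma are illustrative hyperparameters."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies sampled from the spectral density of the RBF kernel:
    # w ~ N(0, 2*gamma*I); phases b ~ Uniform(0, 2*pi).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Every client applies the same shared (W, b) to its local data, so the
# global non-linear task reduces to linear regression on the features.
X = np.random.default_rng(1).normal(size=(50, 5))
Z = rff_features(X)
K_approx = Z @ Z.T  # inner products in feature space
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-1.0 * sq_dists)  # exact RBF kernel (gamma = 1.0)
err = np.abs(K_approx - K_exact).max()
```

Because the random draw of `(W, b)` is fixed by a shared seed, clients can compute the embedding independently without exchanging raw data, and the server trains a single linear model on the resulting features.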