Federated learning enables training a global model from data located at client nodes, without sharing data or moving it to a centralized server. The performance of federated learning in a multi-access edge computing (MEC) network suffers from slow convergence due to heterogeneity and stochastic fluctuations in compute power and communication link quality across clients. We propose a novel coded computing framework, CodedFedL, that injects structured coding redundancy into federated learning to mitigate stragglers and speed up the training procedure. CodedFedL enables coded computing for non-linear federated learning by efficiently exploiting a distributed kernel embedding via random Fourier features, which transforms the training task into computationally favourable distributed linear regression. Furthermore, clients generate local parity datasets by coding over their local datasets, while the server combines them to obtain the global parity dataset. The gradient from the global parity dataset compensates for straggling gradients during training and thereby speeds up convergence. To minimize the epoch deadline time at the MEC server, we provide a tractable approach for finding the amount of coding redundancy and the number of local data points that each client processes during training, exploiting the statistical properties of compute as well as communication delays. We also characterize the leakage in data privacy when clients share their local parity datasets with the server. We analyze the convergence rate and iteration complexity of CodedFedL under simplifying assumptions, by treating CodedFedL as a stochastic gradient descent algorithm. Furthermore, we conduct numerical experiments using practical network parameters and benchmark datasets, where CodedFedL speeds up the overall training time by up to $15\times$ in comparison to the benchmark schemes.
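To make the two primitives named above concrete, here is a minimal NumPy sketch of (i) random Fourier features, which reduce Gaussian-kernel training to ordinary linear regression, and (ii) a linear parity code over the embedded local data whose squared-loss gradient is, in expectation, the full-data gradient. The function names (`rff_features`, `local_parity`), the Gaussian coding matrix, and all dimensions are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def rff_features(X, num_features, sigma, rng):
    """Random Fourier feature embedding approximating a Gaussian (RBF) kernel.

    Maps raw data X (n x d) to z(X) (n x D) such that
    z(x) . z(x') ~= exp(-||x - x'||^2 / (2 sigma^2)),
    so kernel training becomes distributed linear regression.
    """
    n, d = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

def local_parity(Z, Y, num_parity, rng):
    """Code over a client's (already embedded) local data.

    With i.i.d. Gaussian entries scaled so that E[G^T G] = I, the squared-loss
    gradient on the parity pair (G Z, G Y) equals, in expectation, the gradient
    on (Z, Y); summing clients' parity sets at the server yields a global
    parity dataset that can stand in for straggling client gradients.
    """
    n = Z.shape[0]
    G = rng.normal(scale=1.0 / np.sqrt(num_parity), size=(num_parity, n))
    return G @ Z, G @ Y

# Toy usage: two clients, squared loss, one coded gradient at the server.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 8)), rng.normal(size=(50, 1))) for _ in range(2)]
embedded = [(rff_features(X, 256, sigma=1.0, rng=rng), Y) for X, Y in clients]
parity = [local_parity(Z, Y, num_parity=20, rng=rng) for Z, Y in embedded]
Zp = sum(p[0] for p in parity)       # global parity features
Yp = sum(p[1] for p in parity)       # global parity labels
w = np.zeros((256, 1))
coded_grad = Zp.T @ (Zp @ w - Yp)    # unbiased surrogate for the full gradient
```

In this sketch, `num_parity` plays the role of the coding redundancy that the paper tunes against the statistics of compute and communication delays: a larger parity set makes the coded gradient a better substitute for missing client updates, at the cost of extra computation at the server.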