Applying Federated Learning (FL) on Internet-of-Things devices is necessitated by the large volumes of data they produce and growing concerns over data privacy. However, three challenges need to be addressed to make FL efficient: (i) executing on devices with limited computational capabilities, (ii) accounting for stragglers due to the computational heterogeneity of devices, and (iii) adapting to changing network bandwidths. This paper presents FedAdapt, an adaptive offloading FL framework that mitigates the aforementioned challenges. FedAdapt accelerates local training on computationally constrained devices by leveraging layer offloading of deep neural networks (DNNs) to servers. Further, FedAdapt adopts reinforcement learning-based optimization and clustering to adaptively identify which layers of the DNN should be offloaded from each individual device onto a server, tackling the challenges of computational heterogeneity and changing network bandwidth. Experimental studies are carried out on a lab-based testbed comprising five IoT devices. By offloading a DNN from the device to the server, FedAdapt reduces the training time of a typical IoT device by over half compared to classic FL. The training time of extreme stragglers and the overall training time can be reduced by up to 57%. Furthermore, with changing network bandwidth, FedAdapt is demonstrated to reduce the training time by up to 40% compared to classic FL, without sacrificing accuracy. FedAdapt can be downloaded from https://github.com/qub-blesson/FedAdapt.
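The layer-offloading idea described above can be illustrated with a minimal sketch. The code below is a toy illustration, not FedAdapt's actual implementation (which partitions PyTorch models); the `split_model` helper and the doubling "layers" are hypothetical, chosen only to show how a DNN is cut at an offload point so the device runs the early layers and the server completes the rest.

```python
# Toy sketch of layer-level offloading: a model is a list of layer
# callables, partitioned at a per-device offload point. In FedAdapt
# the offload point is chosen by an RL agent; here it is fixed.

def split_model(layers, offload_point):
    """Partition a layer list: the device executes layers[:offload_point],
    the server executes the remainder (hypothetical helper)."""
    return layers[:offload_point], layers[offload_point:]

# Toy "layers": each doubles its input.
layers = [lambda x: x * 2 for _ in range(4)]

device_part, server_part = split_model(layers, offload_point=1)

# Forward pass on the device up to the split point...
activation = 10
for layer in device_part:
    activation = layer(activation)

# ...then the intermediate activation is transmitted to the server,
# which runs the remaining (more compute-intensive) layers.
for layer in server_part:
    activation = layer(activation)

print(activation)  # 10 * 2**4 = 160
```

A straggler device would be assigned a smaller `offload_point` (fewer layers computed locally), while a well-provisioned device or one on a slow link would keep more layers on-device.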