Applying Federated Learning (FL) on Internet-of-Things devices is necessitated by the large volumes of data they produce and growing concerns about data privacy. However, three challenges need to be addressed to make FL efficient: (i) execution on devices with limited computational capabilities, (ii) accounting for stragglers due to computational heterogeneity of devices, and (iii) adaptation to changing network bandwidths. This paper presents FedAdapt, an adaptive offloading FL framework to mitigate the aforementioned challenges. FedAdapt accelerates local training on computationally constrained devices by offloading layers of deep neural networks (DNNs) to servers. Further, FedAdapt adopts reinforcement learning-based optimization and clustering to adaptively identify which layers of the DNN should be offloaded from each individual device onto a server, thereby tackling the challenges of computational heterogeneity and changing network bandwidth. Experimental studies carried out on a lab-based testbed demonstrate that, by offloading a DNN from the device to the server, FedAdapt reduces the training time of a typical IoT device by over half compared to classic FL. The training time of extreme stragglers and the overall training time can be reduced by up to 57%. Furthermore, under changing network bandwidth, FedAdapt is demonstrated to reduce the training time by up to 40% compared to classic FL, without sacrificing accuracy.
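To make the layer-offloading idea concrete, the following minimal PyTorch sketch splits a small CNN at a per-device offloading point: the device executes the first layers and ships the intermediate activations to the server, which executes the rest. This is an illustration only, not FedAdapt's actual implementation; all names (`full_model`, `device_part`, `server_part`, `split_point`) are hypothetical, and in FedAdapt the split point would be chosen by the RL agent based on device capability and observed bandwidth.

```python
import torch
import torch.nn as nn

# Hypothetical end-to-end model; layer indices mark candidate split points.
full_model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 0
    nn.ReLU(),                                    # layer 1
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2
    nn.ReLU(),                                    # layer 3
    nn.AdaptiveAvgPool2d(1),                      # layer 4
    nn.Flatten(),                                 # layer 5
    nn.Linear(32, 10),                            # layer 6
)

# In FedAdapt this decision would come from the RL-based optimizer;
# here it is fixed for illustration.
split_point = 2

device_part = full_model[:split_point]   # runs on the IoT device
server_part = full_model[split_point:]   # runs on the server

x = torch.randn(8, 3, 32, 32)            # a local training batch
activations = device_part(x)             # device-side forward pass
# In a real deployment the activations would be serialized and sent
# over the network; here the "server" runs in the same process.
outputs = server_part(activations)       # server-side forward pass
print(outputs.shape)                     # torch.Size([8, 10])
```

Raising `split_point` keeps more computation on the device (less network traffic, more local work); lowering it offloads more layers to the server, which is what allows constrained devices and stragglers to keep pace.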