Federated learning (FL) is a popular framework for training an AI model using distributed mobile data in a wireless network. It features data parallelism by distributing the learning task to multiple edge devices while attempting to preserve their local-data privacy. One main challenge confronting practical FL is that resource-constrained devices struggle with the computation-intensive task of updating a deep neural network model. To tackle this challenge, a federated dropout (FedDrop) scheme is proposed in this paper, building on the classic dropout scheme for random model pruning. Specifically, in each iteration of the FL algorithm, several subnets are independently generated from the global model at the server using dropout, but with heterogeneous dropout rates (i.e., parameter-pruning probabilities), each adapted to the state of its assigned channel. The subnets are downloaded to the associated devices for updating. Thereby, FedDrop reduces both the communication overhead and the devices' computation loads compared with conventional FL, while outperforming conventional FL in the presence of overfitting and also outperforming the FL scheme with uniform dropout (i.e., identical subnets).
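To make the subnet-generation step concrete, the following is a minimal sketch of dropout-based pruning with per-device (heterogeneous) dropout rates. It is not the paper's implementation: the function names (`make_subnet`, `subnet_mask`) and the example dropout rates are illustrative assumptions; in the actual scheme each rate would be chosen according to the device's channel state.

```python
import numpy as np

def subnet_mask(layer_width: int, dropout_rate: float, rng: np.random.Generator) -> np.ndarray:
    """Sample a binary pruning mask: each unit is kept with probability (1 - dropout_rate)."""
    return (rng.random(layer_width) >= dropout_rate).astype(np.float32)

def make_subnet(global_weights: np.ndarray, dropout_rate: float, rng: np.random.Generator):
    """Extract a device-specific subnet from one dense layer of the global model.

    global_weights has shape [out_units, in_units]; output units are pruned
    independently at the given rate, so a smaller matrix is downloaded.
    """
    mask = subnet_mask(global_weights.shape[0], dropout_rate, rng)
    kept = np.flatnonzero(mask)            # indices of retained units (for later aggregation)
    return global_weights[kept, :], kept

# Illustrative only: three devices with dropout rates assumed to be adapted to
# their channel states (better channel -> lower rate -> larger subnet).
rng = np.random.default_rng(0)
W_global = rng.standard_normal((128, 64)).astype(np.float32)
for device_id, rate in enumerate([0.1, 0.3, 0.5]):
    W_sub, kept = make_subnet(W_global, rate, rng)
    print(f"device {device_id}: dropout rate {rate:.1f}, subnet keeps {len(kept)}/128 units")
```

After local updating, the server would map each device's updated subnet weights back to the retained indices (`kept`) of the global model and aggregate them there; this aggregation step is omitted from the sketch.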