Federated learning (FL) is a popular framework for training an AI model on distributed mobile data in a wireless network. It features data parallelism by distributing the learning task to multiple edge devices while attempting to preserve their local-data privacy. One main challenge confronting practical FL is that resource-constrained devices struggle with the computation-intensive task of updating a deep neural network model. To tackle this challenge, this paper proposes a federated dropout (FedDrop) scheme building on the classic dropout technique for random model pruning. Specifically, in each iteration of the FL algorithm, several subnets are independently generated from the global model at the server using dropout but with heterogeneous dropout rates (i.e., parameter-pruning probabilities), each adapted to the state of an assigned channel. The subnets are downloaded to the associated devices for updating. Thereby, FedDrop reduces both the communication overhead and the devices' computation loads compared with conventional FL, while outperforming the latter in the case of overfitting and also outperforming the FL scheme with uniform dropout (i.e., identical subnets).
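To illustrate the subnet-generation step described above, the following is a minimal sketch of heterogeneous-dropout pruning at the server, under the assumption that the global model is a dictionary of weight arrays and that each device's dropout rate is derived from a channel-quality score in [0, 1]; the helper names `dropout_rate_from_channel` and `generate_subnet` are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_rate_from_channel(channel_quality: float,
                              min_rate: float = 0.1,
                              max_rate: float = 0.5) -> float:
    """Map a (hypothetical) channel-quality score to a parameter-pruning
    probability: poorer channels get higher dropout rates, i.e. smaller subnets."""
    return max_rate - (max_rate - min_rate) * channel_quality

def generate_subnet(global_model: dict, dropout_rate: float) -> dict:
    """Independently prune each parameter with the given probability,
    producing the subnet that would be downloaded to one device."""
    subnet = {}
    for name, weights in global_model.items():
        mask = rng.random(weights.shape) >= dropout_rate  # keep with prob 1 - rate
        subnet[name] = weights * mask
    return subnet

# Example: one subnet per device, each adapted to that device's channel state.
global_model = {"layer1": rng.standard_normal((4, 4)),
                "layer2": rng.standard_normal((4, 2))}
channel_qualities = [0.9, 0.4, 0.1]  # hypothetical per-device channel states
subnets = [generate_subnet(global_model, dropout_rate_from_channel(q))
           for q in channel_qualities]
```

Because each device receives a different randomly pruned subnet sized to its channel, the per-round download and local computation shrink with channel quality, which is the source of the communication and computation savings claimed above.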