We consider federated edge learning (FEEL) over wireless fading channels, taking into account the downlink and uplink channel latencies and the random computation delays at the clients. We speed up the training process by overlapping communication with computation. With fountain-coded transmission of the global model update, clients receive the global model asynchronously and start performing local computations right away. We then propose a dynamic client scheduling policy, called MRTP, for uploading local model updates to the parameter server (PS), which, at any time, schedules the client with the minimum remaining upload time. However, MRTP can lead to biased participation of clients in the update process, resulting in performance degradation in non-IID data scenarios. To overcome this, we propose two alternative schemes with fairness considerations, termed age-aware MRTP (A-MRTP) and opportunistically fair MRTP (OF-MRTP). In A-MRTP, the remaining clients are scheduled according to the ratio between their remaining transmission time and their update age, while in OF-MRTP, the selection mechanism utilizes the long-term average channel rates of the clients to further reduce the latency while ensuring fair participation of the clients. It is shown through numerical simulations that OF-MRTP provides a significant reduction in latency without sacrificing test accuracy.
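The two scheduling rules described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the client records, field names (`remaining_time`, `age`), and the exact form of the A-MRTP ranking ratio are assumptions made for clarity.

```python
# Illustrative sketch of MRTP-style client selection (assumed data layout,
# not the paper's actual implementation).

def mrtp(clients):
    """MRTP: schedule the client with the minimum remaining upload time."""
    return min(clients, key=lambda c: c["remaining_time"])

def a_mrtp(clients):
    """A-MRTP (assumed form): rank clients by the ratio of remaining
    transmission time to update age, so stale clients gain priority."""
    return min(clients, key=lambda c: c["remaining_time"] / c["age"])

# Hypothetical example: client 1 is slower but much staler than client 0.
clients = [
    {"id": 0, "remaining_time": 3.0, "age": 1},
    {"id": 1, "remaining_time": 5.0, "age": 4},
]
```

Under MRTP, client 0 is scheduled (smallest remaining upload time), whereas under the age-aware variant client 1 wins, since its larger update age outweighs its longer remaining transmission time.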