Federated learning (FL) has recently emerged as an attractive decentralized solution for wireless networks, allowing clients to collaboratively train a shared model while keeping their data localized. Existing FL methods generally assume perfect knowledge of the Channel State Information (CSI) during the training phase, which may be difficult to acquire under fast-fading channels. Moreover, existing analyses either consider a fixed number of clients participating in the training of the federated model, or simply assume that all clients transmit model data at the maximum achievable rate. In this paper, we fill these gaps by proposing a training process that uses channel statistics as a bias to minimize the convergence time under imperfect CSI. Numerical experiments demonstrate that the training time can be reduced by neglecting model updates from clients that cannot sustain a minimum predefined transmission rate. We also examine the trade-off between the number of clients involved in the training process and the model accuracy as a function of different fading regimes.
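To make the rate-threshold client selection concrete, the following is a minimal sketch, assuming Rayleigh fading, a Monte-Carlo estimate of each client's average achievable rate, and FedAvg-style aggregation. The helper names (`expected_rate_rayleigh`, `select_clients`, `fedavg`), the SNR range, and the 2.0 bit/s/Hz threshold are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_rate_rayleigh(mean_snr, n_samples=10_000):
    """Monte-Carlo estimate of a client's average achievable rate
    (bits/s/Hz), assuming Rayleigh fading with the given mean SNR."""
    # Under Rayleigh fading, |h|^2 is exponentially distributed with unit mean.
    gains = rng.exponential(1.0, n_samples)
    return np.mean(np.log2(1.0 + mean_snr * gains))

def select_clients(mean_snrs, min_rate):
    """Keep only clients whose channel statistics support the
    minimum predefined transmission rate (illustrative criterion)."""
    return [i for i, snr in enumerate(mean_snrs)
            if expected_rate_rayleigh(snr) >= min_rate]

def fedavg(updates, weights):
    """Weighted average of the selected clients' model updates."""
    assert len(updates) > 0, "no clients met the rate threshold"
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Toy round: 8 clients with heterogeneous average SNRs (linear scale).
mean_snrs = rng.uniform(0.5, 20.0, size=8)       # hypothetical values
kept = select_clients(mean_snrs, min_rate=2.0)   # threshold is illustrative
updates = [rng.normal(size=4) for _ in kept]     # stand-in local updates
sizes = [1.0] * len(kept)                        # equal local dataset sizes
print("clients kept:", kept)
print("aggregated update:", fedavg(updates, sizes))
```

In this sketch, clients with poor average channel statistics are simply dropped from the round, mirroring the trade-off the abstract describes: fewer participating clients per round in exchange for shorter per-round transmission time.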