Federated learning (FL) is a useful tool in distributed machine learning that utilizes users' local datasets in a privacy-preserving manner. When deploying FL in a constrained wireless environment, however, training models in a time-efficient manner can be challenging due to intermittent device connectivity, heterogeneous connection quality, and non-i.i.d. data. In this paper, we provide a novel convergence analysis of FL with non-convex loss functions on both i.i.d. and non-i.i.d. datasets, allowing arbitrary device selection probabilities in each round. Then, using the derived convergence bound, we apply stochastic optimization to develop a new client selection and power allocation algorithm that minimizes a function of the convergence bound and the average communication time under a transmit power constraint, and we find an analytical solution to the minimization problem. One key feature of the algorithm is that knowledge of the channel statistics is not required; only the instantaneous channel state information needs to be known. Using the FEMNIST and CIFAR-10 datasets, we show through simulations that our algorithm significantly decreases the communication time compared to uniformly random participation.