This paper proposes communication pipelining to improve the wireless spectrum utilization efficiency and convergence speed of federated learning in mobile edge computing applications. Because wireless sub-channels are limited, only a subset of the clients can be scheduled in each iteration of a federated learning algorithm, and the scheduled clients must all wait for the slowest among them to finish its local computation. We propose to first cluster the clients according to the time they need per iteration to compute the local gradients of the federated learning model, and then to schedule a mixture of clients from all clusters so that they send their local updates in a pipelined manner. Instead of idly waiting for the slower clients to finish their computation, more clients can thus participate in each iteration. While the duration of a single iteration does not change, the proposed method can significantly reduce the number of iterations required to reach a target accuracy. We provide a generic formulation of optimal client clustering under different settings, and we analytically derive an efficient algorithm for obtaining the optimal solution. We also provide numerical results demonstrating the gains of the proposed method for different datasets and deep learning architectures.
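A minimal sketch of the clustering-and-scheduling idea described above. The function names, the equal-size clustering rule, and the round-robin mixing policy are illustrative assumptions, not the paper's exact formulation: clients are grouped by per-iteration compute time, and each round's schedule mixes clients from every cluster so that faster clients' uploads can be pipelined over the sub-channels while slower clients are still computing.

```python
# Hypothetical sketch: cluster clients by compute time, then schedule a
# mixture of clients from all clusters (assumed policy, not the paper's
# exact algorithm).

def cluster_by_compute_time(compute_times, num_clusters):
    """Sort clients by per-iteration compute time and split them into
    contiguous, roughly equal-size clusters (fastest cluster first)."""
    order = sorted(range(len(compute_times)), key=lambda i: compute_times[i])
    size = -(-len(order) // num_clusters)  # ceiling division
    return [order[i:i + size] for i in range(0, len(order), size)]

def pipelined_schedule(clusters, clients_per_round):
    """Pick clients from every cluster for one round. Mixing fast and
    slow clients lets fast clients upload early, filling the limited
    sub-channels while slow clients finish their local computation."""
    per_cluster = -(-clients_per_round // len(clusters))
    schedule = []
    for cluster in clusters:
        schedule.extend(cluster[:per_cluster])
    return schedule[:clients_per_round]
```

For example, with six clients whose compute times are `[1.0, 5.0, 2.0, 4.0, 3.0, 6.0]` and two clusters, the fast cluster holds clients `[0, 2, 4]` and the slow cluster `[3, 1, 5]`; a round of four clients then mixes two from each, rather than drawing all four from one speed tier.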