This paper studies joint device selection and power control for wireless federated learning (FL), considering both the downlink and uplink communications between the parameter server (PS) and the terminal devices. In each round of model training, the PS first broadcasts the global model to the terminal devices in an analog fashion, and then the terminal devices perform local training and upload the updated model parameters to the PS via over-the-air computation (AirComp). First, we propose an AirComp-based adaptive reweighing scheme for aggregating the locally updated models, where the model aggregation weights are directly determined by the uplink transmit power values of the selected devices, so that learning and communication can be jointly optimized simply through device selection and power control. Furthermore, we provide a convergence analysis of the proposed wireless FL algorithm and derive an upper bound on the optimality gap between the expected and the optimal global loss values. With instantaneous channel state information (CSI), we formulate optimality gap minimization problems under the individual and sum uplink transmit power constraints, respectively, and show that they can be solved via the semidefinite relaxation (SDR) technique. Numerical results reveal that our proposed wireless FL algorithm achieves performance close to that of the ideal FedAvg scheme with error-free model exchange and full device participation.
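To make the reweighing scheme concrete, the following is a minimal sketch of an AirComp aggregation model consistent with the description above; the symbols (selected device set $\mathcal{S}$, local model $\mathbf{x}_k$, uplink transmit power $p_k$, uplink channel $h_k$, and receiver noise $\mathbf{n}$) are illustrative assumptions rather than the paper's own notation:
% Illustrative sketch only: notation assumed, not taken from the paper.
\begin{align*}
\mathbf{y} &= \sum_{k \in \mathcal{S}} \sqrt{p_k}\, h_k\, \mathbf{x}_k + \mathbf{n} && \text{(superimposed signal received at the PS)},\\
\alpha_k &= \frac{\sqrt{p_k}\, |h_k|}{\sum_{j \in \mathcal{S}} \sqrt{p_j}\, |h_j|} && \text{(power-induced aggregation weight of device $k$)},
\end{align*}
so that, under this model, choosing the transmit powers $\{p_k\}$ simultaneously fixes the uplink communication scheme and the model-averaging weights.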