We study federated edge learning, where a global model is trained collaboratively using privacy-sensitive data at the edge of a wireless network. A parameter server (PS) keeps track of the global model and shares it with the wireless edge devices, which train on their private local data. The devices then transmit their local model updates to the PS, where they are used to update the global model. The algorithm, which involves transmissions over both the PS-to-device and the device-to-PS links, continues until the global model converges or no participating devices remain. In this study, we consider device selection based on the downlink channels over which the PS shares the global model with the devices. With digital downlink transmission, we design a partial device participation framework in which a subset of the devices is selected for training at each iteration. Because the broadcast channel is shared, the participating devices can obtain a better estimate of the global model than in the full device participation case, at the price of updating the global model with respect to a smaller set of data. At each iteration, the PS broadcasts different quantized global model updates to different participating devices, based on the last global model estimates available at those devices. We investigate the best number of participating devices through experiments on image classification with the MNIST dataset under a biased (non-IID) data distribution.
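To make the training loop described above concrete, the following is a minimal sketch of partial device participation with quantized downlink broadcasting, written in Python with NumPy. The quantizer, the channel-gain-based selection rule, the surrogate local training step, and all parameter values are illustrative assumptions for exposition, not the paper's exact scheme or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(delta, bits=4):
    # Illustrative uniform stochastic quantizer for the downlink model update
    # (an assumption; the paper's quantization scheme may differ).
    scale = np.max(np.abs(delta)) + 1e-12
    levels = 2 ** bits - 1
    q = np.floor(np.abs(delta) / scale * levels + rng.random(delta.shape))
    return np.sign(delta) * q / levels * scale

def local_update(w, data, lr=0.1, steps=5):
    # Placeholder local training: a few gradient steps on a quadratic surrogate
    # loss centered at the device's data mean (a stand-in for MNIST training).
    target = data.mean(axis=0)
    for _ in range(steps):
        w = w - lr * (w - target)
    return w

dim, num_devices, rounds, k = 20, 10, 30, 4
w_global = np.zeros(dim)
w_est = [np.zeros(dim) for _ in range(num_devices)]  # last global-model estimate held at each device
# Biased (non-IID) local datasets: each device's data is drawn around a different mean.
device_data = [rng.normal(m, 1.0, size=(50, dim)) for m in range(num_devices)]

for t in range(rounds):
    # Device selection based on (simulated) downlink channel gains.
    gains = rng.rayleigh(size=num_devices)
    selected = np.argsort(gains)[-k:]

    updates = []
    for i in selected:
        # PS broadcasts a quantized update relative to device i's last global-model estimate.
        w_est[i] = w_est[i] + quantize(w_global - w_est[i])
        # Device trains locally and reports its model update to the PS over the uplink.
        updates.append(local_update(w_est[i], device_data[i]) - w_est[i])

    # PS aggregates the participating devices' updates into the global model.
    w_global = w_global + np.mean(updates, axis=0)

print("final global model (first 5 coordinates):", w_global[:5])
```

Keeping a per-device estimate of the global model is what lets the PS send only a quantized difference on the downlink; the trade-off studied in the abstract is between the accuracy of that estimate (fewer participants, more downlink resources each) and the amount of data contributing to each global update.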