A new machine learning (ML) technique termed federated learning (FL) aims to keep data at the edge devices and to exchange only ML model parameters during the learning process. FL not only reduces the communication load but also helps protect local data privacy. Despite these advantages, FL can still suffer from large communication latency when massive numbers of edge devices are connected to the central parameter server (PS) and/or millions of model parameters are involved in the learning process. Over-the-air computation (AirComp) enables computation during transmission by allowing multiple devices to send their data simultaneously using analog modulation. To achieve good FL performance with AirComp, user scheduling plays a critical role. In this paper, we investigate and compare different user scheduling policies based on various criteria, such as wireless channel conditions and the significance of model updates. Receiver beamforming is applied to minimize the mean-square-error (MSE) of the distortion of the function aggregation result obtained via AirComp. Simulation results show that scheduling based on the significance of model updates exhibits smaller fluctuations in the training process, whereas scheduling based on channel conditions has an advantage in energy efficiency.
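For context, a commonly used AirComp aggregation model with receive beamforming can be sketched as follows; this is an illustrative formulation under standard assumptions (unit-variance, zero-mean, independent model-update symbols $s_k$ and additive white Gaussian noise), not necessarily the exact system model of this paper. Let $\mathcal{S}$ denote the set of scheduled devices, $\mathbf{h}_k$ the channel of device $k$, $b_k$ its transmit scalar with power constraint $|b_k|^2 \le P_k$, $\mathbf{m}$ the receive beamforming vector at the PS, and $\eta$ a normalization factor. The PS receives $\mathbf{y} = \sum_{k\in\mathcal{S}} \mathbf{h}_k b_k s_k + \mathbf{n}$ with $\mathbf{n}\sim\mathcal{CN}(\mathbf{0},\sigma^2\mathbf{I})$, and estimates the target aggregate $g = \sum_{k\in\mathcal{S}} s_k$ as $\hat{g} = \mathbf{m}^{\mathsf{H}}\mathbf{y}/\sqrt{\eta}$, giving the MSE
\[
\mathrm{MSE} = \mathbb{E}\big[\,|\hat{g}-g|^2\,\big]
= \sum_{k\in\mathcal{S}} \left| \frac{\mathbf{m}^{\mathsf{H}}\mathbf{h}_k b_k}{\sqrt{\eta}} - 1 \right|^2
+ \frac{\sigma^2 \|\mathbf{m}\|^2}{\eta},
\]
which is minimized over $\mathbf{m}$, $\{b_k\}$, and the scheduled set $\mathcal{S}$ subject to the per-device power constraints.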