Model-free techniques, such as machine learning (ML), have recently attracted significant interest in physical layer design, e.g., symbol detection, channel estimation, and beamforming. Most of these ML techniques employ centralized learning (CL) schemes, which assume the availability of datasets at a parameter server (PS) and therefore demand the transmission of data from edge devices, such as mobile phones, to the PS. To exploit the data generated at the edge, federated learning (FL) has recently been proposed as a distributed learning scheme, in which each device computes the model parameters and sends them to the PS for model aggregation while the datasets remain on the devices. Thus, FL is more communication-efficient and privacy-preserving than CL, and it is well suited to wireless communication scenarios, wherein the data are generated at the edge devices. This article presents recent advances in FL-based training for physical layer design problems. Compared to CL, FL is shown to substantially reduce communication overhead at the cost of a slight loss in learning accuracy. Design challenges, such as model, data, and hardware complexity, are also discussed in detail along with possible solutions.
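The FL training loop described above (local computation at each device, parameter aggregation at the PS) can be sketched as federated averaging. This is a minimal illustration, not the article's specific algorithm: the linear least-squares local learner, the learning rate, and the round counts are all illustrative assumptions, but the communication pattern is the one the abstract describes — only model parameters, never raw data, travel between devices and the PS.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's local training step: gradient descent on a
    least-squares linear model (a stand-in for any local learner)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def federated_averaging(global_w, device_data, rounds=10):
    """FedAvg-style loop: the PS broadcasts the global model, each
    device trains on its own local dataset, and the PS aggregates the
    returned parameters weighted by local dataset size. The datasets
    themselves never leave the devices."""
    n_total = sum(len(y) for _, y in device_data)
    for _ in range(rounds):
        updates = [(local_update(global_w, X, y), len(y))
                   for X, y in device_data]
        global_w = sum((n / n_total) * w for w, n in updates)
    return global_w
```

Note the communication cost per round: each device uploads only its parameter vector (here two floats), independent of how many local samples it holds — the source of FL's communication-overhead advantage over shipping the raw dataset to the PS.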