Model-free techniques such as machine learning (ML) have recently attracted considerable interest for physical layer design, e.g., symbol detection, channel estimation, and beamforming. Most of these ML techniques employ centralized learning (CL) schemes and assume the availability of datasets at a parameter server (PS), demanding the transmission of data from the edge devices, such as mobile phones, to the PS. To exploit the data generated at the edge, federated learning (FL) has recently been proposed as a distributed learning scheme, in which each device computes model parameters and sends them to the PS for aggregation, while the datasets remain at the edge devices. Thus, FL is more communication-efficient and privacy-preserving than CL, and is applicable to wireless communication scenarios in which the data are generated at the edge devices. This article discusses recent advances in FL-based training for physical layer design problems, and identifies the associated design challenges along with possible solutions to improve performance in terms of communication overhead and model/data/hardware complexity.
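The FL training loop described above (local updates at each edge device, parameter aggregation at the PS) can be sketched as a minimal federated averaging (FedAvg) round. This is an illustrative sketch, not the article's method: the least-squares local objective, the function names, and all parameters are assumptions chosen for brevity.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One edge device's local training: gradient steps on its private
    least-squares loss; only the resulting parameters leave the device."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, device_data, lr=0.1, epochs=5):
    """PS side: collect each device's parameters and aggregate them,
    weighted by local dataset size (standard FedAvg weighting)."""
    updates, sizes = [], []
    for X, y in device_data:
        updates.append(local_update(w_global, X, y, lr, epochs))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, float))

# Toy setup: 4 devices, each holding private samples of the same linear model.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
devices = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    devices.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(20):  # communication rounds between devices and the PS
    w = fedavg_round(w, devices)
```

Note that in each round only the model parameters `w` are exchanged; the per-device datasets `(X, y)` never leave the edge, which is the source of FL's communication and privacy advantages over CL.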