Edge federated learning (FL) is an emerging paradigm that trains a global parametric model from distributed datasets via wireless communications. This paper proposes a unit-modulus over-the-air computation (UMAirComp) framework for efficient edge federated learning, which simultaneously uploads local model parameters and updates global model parameters via analog beamforming. The proposed framework avoids sophisticated baseband signal processing, leading to low communication delays and implementation costs. Training loss bounds of UMAirComp FL systems are derived, and two low-complexity large-scale optimization algorithms, termed penalty alternating minimization (PAM) and accelerated gradient projection (AGP), are proposed to minimize the nonconvex nonsmooth loss bound. Simulation results show that the proposed UMAirComp framework with the PAM algorithm achieves a smaller mean square error in model-parameter estimation, lower training loss, and lower test error than benchmark schemes. Moreover, the proposed UMAirComp framework with the AGP algorithm achieves satisfactory performance while reducing the computational complexity by orders of magnitude compared with existing optimization algorithms. Finally, we demonstrate the implementation of UMAirComp on a vehicle-to-everything autonomous driving simulation platform. It is found that autonomous driving tasks are more sensitive to model-parameter errors than other tasks, since the neural networks for autonomous driving contain sparser model parameters.
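To make the over-the-air aggregation idea concrete, the following is a minimal toy sketch (not the paper's PAM or AGP algorithm) of phase-only, unit-modulus precoding: each device rotates its transmitted signal so that the fading channels add coherently, the signals superpose "in the air," and the server recovers a channel-weighted average of the local model parameters in a single transmission. All variable names and the noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 8, 1000  # number of edge devices, model dimension (toy sizes)

# Local model parameters held by each device (toy values).
local_params = rng.normal(size=(K, d))

# Complex Rayleigh fading channel from each device to the edge server.
h = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)

# Unit-modulus (phase-only) transmit precoding: each device rotates its
# signal by the negative channel phase so contributions add coherently.
# By construction |b_k| = 1 for every device.
b = np.exp(-1j * np.angle(h))

# All devices transmit simultaneously; the multiple-access channel sums
# their signals, and the server observes one superposed vector plus noise.
noise = 0.01 * (rng.normal(size=d) + 1j * rng.normal(size=d))
received = (h * b) @ local_params + noise  # shape (d,)

# With phase-only precoding the effective gains are |h_k| (real, positive),
# so normalizing yields a channel-weighted average of the local models.
weights = np.abs(h) / np.abs(h).sum()
estimate = received.real / np.abs(h).sum()
weighted_avg = weights @ local_params

mse = np.mean((estimate - weighted_avg) ** 2)
```

Note that because the precoder cannot scale amplitudes (the unit-modulus constraint), the recovered vector is a channel-weighted rather than uniform average; equalizing the weights under this constraint is exactly the kind of nonconvex design problem the paper's optimization algorithms address.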