We propose a novel communication design, termed random orthogonalization, for federated learning (FL) in a massive multiple-input multiple-output (MIMO) wireless system. The key novelty of random orthogonalization comes from the tight coupling of FL with two unique characteristics of massive MIMO -- channel hardening and favorable propagation. As a result, random orthogonalization achieves natural over-the-air model aggregation without requiring transmitter-side channel state information (CSI) in the uplink phase of FL, while significantly reducing the channel estimation overhead at the receiver. We extend this principle to the downlink communication phase and develop a simple but highly effective model broadcast method for FL. We also relax the massive MIMO assumption by proposing an enhanced random orthogonalization design for both uplink and downlink FL communications that does not rely on channel hardening or favorable propagation. We carry out theoretical analyses of both communication and machine learning performance. In particular, we establish an explicit relationship among the convergence rate, the number of clients, and the number of antennas. Experimental results validate the effectiveness and efficiency of random orthogonalization for FL in massive MIMO.
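To make the two massive MIMO properties invoked above concrete, the following NumPy sketch numerically illustrates channel hardening, favorable propagation, and a toy over-the-air aggregation built on them. It is a minimal illustration under assumed i.i.d. Rayleigh fading with illustrative antenna count M, client count K, scalar model updates, and no noise; it is not the paper's random orthogonalization scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 256, 8  # illustrative number of BS antennas and number of clients

# i.i.d. Rayleigh channels h_k ~ CN(0, I_M), one column per client
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

# Channel hardening: ||h_k||^2 / M concentrates around 1 as M grows
hardening = np.linalg.norm(H, axis=0) ** 2 / M
print("||h_k||^2 / M :", np.round(hardening, 3))

# Favorable propagation: h_i^H h_j / M vanishes for i != j as M grows
G = (H.conj().T @ H) / M
off_diag = G - np.diag(np.diag(G))
print("max |h_i^H h_j| / M (i != j):", np.round(np.abs(off_diag).max(), 3))

# Toy over-the-air aggregation: clients transmit scalar updates x_k
# simultaneously over their channels; combining the superimposed signal
# with the summed channel direction recovers approximately sum_k x_k
# without per-client equalization or transmitter-side CSI.
x = rng.standard_normal(K)      # one scalar "model update" per client (illustrative)
y = H @ x                       # superimposed uplink signal at the BS (noise omitted)
estimate = np.real(np.sum(H, axis=1).conj() @ y) / M
print("sum x_k =", round(x.sum(), 3), " over-the-air estimate =", round(estimate, 3))
```

With the assumed parameters, the hardening ratios cluster near 1 and the cross terms near 0, so the combined estimate tracks the true sum of updates; increasing M tightens both approximations.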