In this paper, we investigate the communication design of over-the-air computation (AirComp)-empowered federated learning (FL) systems, jointly considering uplink model aggregation and downlink model dissemination. We first derive an upper bound on the expected gap between the training loss and the optimal loss, which reveals that optimizing the FL performance is equivalent to minimizing the distortion in the global gradient vector received at each edge node. Accordingly, we jointly optimize the transmit and receive equalization coefficients at each edge node together with the forwarding matrix at the edge server to minimize the maximum gradient distortion across all edge nodes. We then evaluate the considered FL system on the handwritten digit recognition task using the MNIST dataset. Experimental results show that deploying multiple antennas at the edge server significantly reduces the distortion in the received global gradient vector, yielding a notable improvement in recognition accuracy over the single-antenna case.
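To make the distortion metric concrete, the following is a minimal NumPy sketch of the signal chain described above: uplink over-the-air superposition of transmit-scaled local gradients at a multi-antenna edge server, forwarding through a matrix F, and scalar receive equalization at each edge node, followed by the maximum per-node mean-squared distortion that the joint design seeks to minimize. The numbers of nodes, antennas, gradient dimension, noise levels, and the specific coefficient choices (channel-norm power scaling, identity forwarding, sum-gain normalization) are illustrative assumptions, not the optimized solution derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

K, N, d = 4, 8, 10              # edge nodes, edge-server antennas, gradient dimension (assumed)
sigma_up, sigma_dn = 0.1, 0.1   # uplink / downlink noise standard deviations (assumed)

# Local gradients (stand-ins for real training gradients) and the ideal global gradient.
local_grads = rng.standard_normal((K, d))
global_grad = local_grads.mean(axis=0)

# Uplink channels h_k (node k -> server) and downlink channels g_k (server -> node k).
h = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
g = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))

# Naive stand-ins for the jointly optimized variables: transmit coefficients b_k,
# edge-server forwarding matrix F, and per-node receive coefficients c_k.
b = 1.0 / np.linalg.norm(h, axis=1)     # crude power normalization, no phase alignment
F = np.eye(N, dtype=complex)            # identity amplify-and-forward at the server

# Uplink AirComp: the b_k-scaled local gradients superpose over the air at the N antennas.
noise_up = (rng.standard_normal((N, d)) + 1j * rng.standard_normal((N, d))) * sigma_up / np.sqrt(2)
y = (h * b[:, None]).T @ local_grads + noise_up

# Downlink dissemination: the server broadcasts F @ y; node k equalizes with scalar c_k.
distortions = []
for k in range(K):
    gains = g[k] @ F @ (h * b[:, None]).T   # effective end-to-end gain from every node to node k
    c_k = 1.0 / gains.sum()                 # normalize so perfectly aligned gains recover the mean
    noise_dn = (rng.standard_normal(d) + 1j * rng.standard_normal(d)) * sigma_dn / np.sqrt(2)
    est_k = np.real(c_k * (g[k] @ F @ y + noise_dn))   # node k's estimate of the global gradient
    distortions.append(np.mean(np.abs(est_k - global_grad) ** 2))

print("max per-node gradient distortion (MSE):", max(distortions))
```

With these naive coefficients the residual distortion comes from both channel misalignment and noise; the min-max design summarized in the abstract chooses the transmit coefficients, forwarding matrix, and receive coefficients to drive this worst-case distortion down.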