Federated edge learning is a promising technology for deploying intelligence at the edge of wireless networks in a privacy-preserving manner. In this setting, multiple clients collaboratively train a global model under the coordination of an edge server. However, training efficiency is often throttled by limited communication resources and data heterogeneity. This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck. To cope with data heterogeneity, we additionally leverage a bi-level optimization framework to personalize the federated learning model, which enhances the generalization and robustness of each client's local model. We elaborate on the model training procedure and its advantages over conventional frameworks, provide a convergence analysis that theoretically demonstrates the training efficiency, and conduct extensive experiments to validate the efficacy of the proposed framework.