Federated learning (FL) has been recognized as a viable distributed learning paradigm that collaboratively trains a machine learning model with massive numbers of mobile devices at the wireless edge while protecting user privacy. Although various communication schemes have been proposed to expedite the FL process, most of them assume ideal wireless channels that provide reliable and lossless communication links between the server and the mobile clients. Unfortunately, in practical systems with limited radio resources, such as constraints on training latency, transmission power, and bandwidth, transmitting a large number of model parameters inevitably suffers from quantization errors (QE) and transmission outage (TO). In this paper, we consider such non-ideal wireless channels and carry out the first analysis showing that FL convergence can be severely jeopardized by TO and QE, but, intriguingly, that this impact can be alleviated if the clients have uniform outage probabilities. These insightful results motivate us to propose a robust FL scheme, named FedTOE, which performs joint allocation of wireless resources and quantization bits across the clients to minimize the QE while equalizing the TO probability across clients. Extensive experimental results are presented to show the superior performance of FedTOE for a deep learning-based classification task under transmission latency constraints.
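To make the two ingredients of the abstract concrete, below is a minimal, illustrative sketch (not the paper's FedTOE algorithm): an unbiased stochastic uniform quantizer for a client's model update, plus a toy per-client bit allocation that fits each client's payload to its own link rate under a shared latency budget, a rough proxy for equalizing transmission behavior across heterogeneous clients. All function names, parameters, and the allocation rule here are hypothetical assumptions for illustration only.

```python
# Illustrative sketch only; the actual FedTOE joint resource/bit allocation
# is an optimization problem not reproduced here.
import numpy as np

def stochastic_quantize(x, num_bits):
    """Unbiased stochastic uniform quantization of vector x using num_bits bits."""
    levels = 2 ** num_bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    normalized = (x - lo) / scale                    # values in [0, levels]
    floor = np.floor(normalized)
    # Round up with probability equal to the fractional part -> unbiased estimate.
    q = floor + (np.random.rand(*x.shape) < (normalized - floor))
    return lo + q * scale

def allocate_bits(dims, rates_bps, latency_s):
    """Hypothetical rule: give each client as many bits per parameter as its
    own link rate allows within the shared latency budget, so every client's
    payload takes the same airtime."""
    return [max(1, int(r * latency_s // d)) for d, r in zip(dims, rates_bps)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    update = rng.standard_normal(10_000)             # one client's model update
    bits = allocate_bits(dims=[10_000] * 3,
                         rates_bps=[1e6, 2e6, 4e6],  # heterogeneous link rates
                         latency_s=0.05)
    print("bits per parameter:", bits)               # e.g. [5, 10, 20]
    q = stochastic_quantize(update, bits[0])
    print("quantization MSE:", np.mean((q - update) ** 2))
```

Note the trade-off this sketch exposes: clients with weaker links must use fewer quantization bits (larger QE) to meet the same latency target, which is exactly the tension the abstract says FedTOE resolves jointly.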