Wireless federated learning (WFL) suffers from an uplink communication bottleneck, which limits the number of users that can upload their local models in each global aggregation round. This paper presents a new multi-carrier non-orthogonal multiple-access (MC-NOMA)-empowered WFL system under an adaptive learning setting of Flexible Aggregation. Since a WFL round accommodates both local model training and uploading for each user, Flexible Aggregation allows the users to train different numbers of iterations per round, adapting to their channel conditions and computing resources. The key idea is to use MC-NOMA to upload the users' local models concurrently, thereby extending the users' local model training times and increasing the number of participating users. A new metric, namely, the Weighted Global Proportion of Trained Mini-batches (WGPTM), is analytically established to measure the convergence of the new system. Moreover, we maximize the WGPTM to improve the convergence of the new system by jointly optimizing the transmit powers and subchannel bandwidths. This nonconvex problem is converted equivalently to a tractable convex problem and solved efficiently using variable substitution and Cauchy's inequality. As corroborated experimentally using a convolutional neural network and an 18-layer residual network, the proposed MC-NOMA WFL can efficiently reduce the communication delay, increase the local model training times, and accelerate the convergence by over 40%, compared to its existing alternative.
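The role of Cauchy's inequality in the bandwidth step can be illustrated with a simplified, hypothetical subproblem (not the paper's exact formulation): minimizing a sum of per-user upload delays $\sum_i c_i/B_i$ subject to a total-bandwidth budget $\sum_i B_i = B$. By the Cauchy–Schwarz inequality, $(\sum_i c_i/B_i)(\sum_i B_i) \ge (\sum_i \sqrt{c_i})^2$, with equality when each $B_i$ is proportional to $\sqrt{c_i}$, giving a closed-form allocation. A minimal sketch, with hypothetical per-user load values:

```python
import math

def allocate_bandwidth(costs, total_bw):
    """Closed-form minimizer of sum(c_i / B_i) subject to sum(B_i) = total_bw.

    By Cauchy-Schwarz, (sum c_i/B_i) * (sum B_i) >= (sum sqrt(c_i))^2,
    with equality iff B_i is proportional to sqrt(c_i).
    """
    roots = [math.sqrt(c) for c in costs]
    scale = total_bw / sum(roots)
    return [scale * r for r in roots]

def total_delay(costs, bws):
    """Aggregate upload delay under a given bandwidth split."""
    return sum(c / b for c, b in zip(costs, bws))

# Hypothetical per-user upload loads and a total uplink budget.
costs = [4.0, 1.0, 9.0]
B = 10.0

opt = allocate_bandwidth(costs, B)              # [3.33.., 1.66.., 5.0]
uniform = [B / len(costs)] * len(costs)         # naive equal split
bound = sum(math.sqrt(c) for c in costs) ** 2 / B  # Cauchy-Schwarz lower bound
```

Here the square-root allocation meets the Cauchy–Schwarz lower bound exactly, whereas a uniform split does not; the paper's actual problem additionally couples bandwidths with transmit powers.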