With its privacy preservation and communication efficiency, federated learning (FL) has emerged as a learning framework well suited to beyond-5G and 6G systems. This work considers a future scenario in which multiple groups, each with a different learning purpose, participate in separate FL processes. We provide energy-efficient solutions to demonstrate that this scenario can be realistic. First, to ensure stable operation of multiple FL processes over wireless channels, we propose using a massive multiple-input multiple-output network to support the local and global FL training updates, and we let the iterations of these FL processes be executed within the same large-scale coherence time. Then, we develop asynchronous and synchronous transmission protocols, in which these iterations are executed asynchronously and synchronously, respectively, using downlink unicasting and conventional uplink transmission schemes. Zero-forcing processing is utilized for both uplink and downlink transmissions. Finally, we propose an algorithm that optimally allocates power and computation resources to save energy at both the base station and user sides, while guaranteeing a given maximum execution-time threshold for each FL iteration. Compared to the baseline schemes, the proposed algorithm significantly reduces energy consumption, especially when the number of base station antennas is large.
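To make the zero-forcing processing mentioned above concrete, the following is a minimal numerical sketch (not taken from the paper): for a downlink with M base-station antennas serving K single-antenna users, the zero-forcing precoder is the right pseudo-inverse of the channel matrix, which nulls inter-user interference. The antenna and user counts and the i.i.d. Rayleigh channel model are illustrative assumptions.

```python
# Illustrative zero-forcing (ZF) downlink precoding sketch.
# Assumptions (not from the paper): M = 8 antennas, K = 4 users,
# i.i.d. Rayleigh fading channel H of shape (K, M).
import numpy as np

rng = np.random.default_rng(0)
M, K = 8, 4
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# ZF precoder: right pseudo-inverse of H, so H @ W is the K x K identity
# (up to a power-normalization scalar).
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W = W / np.linalg.norm(W)  # normalize total transmit power

effective = H @ W
# Off-diagonal entries of the effective channel are the residual
# inter-user interference; ZF forces them to (numerically) zero.
interference = np.abs(effective - np.diag(np.diag(effective))).max()
print(interference)
```

The same pseudo-inverse structure applies on the uplink as a zero-forcing receive combiner; in a massive MIMO regime (M much larger than K), the normalization penalty of ZF shrinks, which is consistent with the abstract's observation that energy savings grow with the number of base station antennas.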