Federated learning (FL) is a key enabler for efficient communication and computing, leveraging devices' distributed computing capabilities. However, applying FL in practice is challenging due to the local devices' heterogeneous energy budgets, wireless channel conditions, and non-independent and identically distributed (non-IID) data distributions. To cope with these issues, this paper proposes a novel learning framework that integrates FL with width-adjustable slimmable neural networks (SNNs). Integrating FL with SNNs is nontrivial because of time-varying channel conditions and data distributions. Moreover, existing multi-width SNN training algorithms are sensitive to the data distributions across devices, which makes SNNs ill-suited for FL. Motivated by this, we propose a communication- and energy-efficient SNN-based FL framework (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models. By applying SC, SlimFL exchanges a superposition of multiple width configurations, which are decoded into as many configurations as the given communication throughput allows. Leveraging ST, SlimFL aligns the forward propagation of different width configurations while avoiding inter-width interference during backpropagation. We formally prove the convergence of SlimFL. The result reveals that SlimFL is not only communication-efficient but also copes with non-IID data distributions and poor channel conditions, which is also corroborated by data-intensive simulations.
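To make the superposition training (ST) idea concrete, the sketch below shows one possible local update for a two-width slimmable model: the losses of all width configurations are summed into a single objective and backpropagated once, so the forward passes of the widths stay aligned while per-width gradients do not interfere through separate backward passes. This is only a minimal illustration under assumptions; the `set_width` switch and the equal loss weights are hypothetical placeholders, not the paper's exact training procedure.

```python
# Minimal sketch of superposition training (ST) for a two-width slimmable model.
# Assumes a PyTorch model with a user-provided `set_width` helper (hypothetical)
# that activates only the first width-fraction of channels in each layer.
import torch
import torch.nn.functional as F


def superposition_train_step(model, set_width, batch, optimizer,
                             widths=(0.5, 1.0), weights=(0.5, 0.5)):
    """One local update: superimpose the losses of all width configurations
    and run a single backward pass, instead of one backprop per width."""
    x, y = batch
    optimizer.zero_grad()
    total_loss = 0.0
    for w, lam in zip(widths, weights):
        set_width(model, w)      # activate the w-fraction-width configuration
        logits = model(x)        # forward pass of this width configuration
        total_loss = total_loss + lam * F.cross_entropy(logits, y)
    total_loss.backward()        # single backward pass on the superimposed loss
    optimizer.step()
    return float(total_loss.detach())
```

In this reading, the left-half (0.5-width) configuration and the full-width configuration share the same superimposed objective, which mirrors how SC later lets a receiver decode either the half-width or the full-width model depending on the channel.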