This paper aims to integrate two synergistic technologies, federated learning (FL) and width-adjustable slimmable neural network (SNN) architectures. FL preserves data privacy by exchanging the locally trained models of mobile devices instead of their raw data. By adopting SNNs as local models, FL can flexibly cope with the time-varying energy capacities of mobile devices. Combining FL and SNNs is, however, non-trivial, particularly under wireless connections with time-varying channel conditions. Furthermore, existing multi-width SNN training algorithms are sensitive to the data distributions across devices, and are thus ill-suited to FL. Motivated by this, we propose a communication- and energy-efficient SNN-based FL framework (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models. By applying SC, SlimFL exchanges a superposition of multiple width configurations, from which as many configurations as possible are decoded for a given communication throughput. Leveraging ST, SlimFL aligns the forward propagation of different width configurations while avoiding inter-width interference during backpropagation. We formally prove the convergence of SlimFL. The analysis reveals that SlimFL is not only communication-efficient but can also counteract non-IID data distributions and poor channel conditions, which is further corroborated by simulations.
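To make the superposition-training (ST) idea concrete, below is a minimal sketch assuming PyTorch. The two-layer `SlimmableMLP`, the `superposition_step` helper, the width set {0.5x, 1.0x}, and the equal loss weights are illustrative assumptions for exposition, not the paper's exact architecture or hyperparameters. The key point the sketch shows is that the losses of all width configurations are superposed into one objective, so a single backward pass accumulates their gradients jointly rather than letting one width's update overwrite another's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableMLP(nn.Module):
    """Toy two-layer MLP whose hidden width can be scaled at forward time.

    The 0.5x subnetwork reuses the first half of the full network's weights,
    so narrower configurations are nested inside wider ones.
    """
    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x, width_mult=1.0):
        h = int(self.fc1.out_features * width_mult)
        # Slice the first h hidden units (the "left" part of each weight
        # matrix) to realize the chosen width configuration.
        z = F.linear(x, self.fc1.weight[:h], self.fc1.bias[:h]).relu()
        return F.linear(z, self.fc2.weight[:, :h], self.fc2.bias)

def superposition_step(model, opt, x, y, widths=(0.5, 1.0), weights=(0.5, 0.5)):
    """One ST update: superpose the per-width losses, then backprop once."""
    opt.zero_grad()
    loss = sum(w * F.cross_entropy(model(x, m), y)
               for m, w in zip(widths, weights))
    loss.backward()  # gradients of all widths accumulate jointly
    opt.step()
    return loss.item()

# Illustrative usage on random data.
model = SlimmableMLP()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
superposition_step(model, opt, x, y)
```

Because the narrow configuration is nested inside the wide one, the weighted single-objective update keeps the forward passes of different widths consistent, which is the interference-avoidance property the abstract attributes to ST.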