In this work, we investigate the challenging problem of on-demand federated learning (FL) over heterogeneous edge devices with diverse resource constraints. We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates under a wide range of efficiency constraints. To this end, we design model shrinking to support local model training with elastic computation cost, and gradient compression to allow parameter transmission with dynamic communication overhead. An enhanced parameter aggregation is then conducted in an element-wise manner to improve the global model performance. Building on AnycostFL, we further propose an optimization design that minimizes the global training loss under personalized latency and energy constraints. Guided by theoretical insights from the convergence analysis, personalized training strategies are derived for different devices to match their locally available resources. Experimental results indicate that, compared with state-of-the-art efficient FL algorithms, our learning framework reduces the training latency and energy consumption by up to 1.9 times while realizing a reasonable global testing accuracy. Moreover, the results demonstrate that our approach significantly improves the converged global accuracy.
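To make the two cost-control mechanisms named above concrete, below is a minimal sketch of (1) gradient compression with a per-device keep ratio and (2) element-wise aggregation of the resulting heterogeneous updates. The abstract does not specify the compression scheme, so top-k magnitude sparsification is assumed here for illustration; the function names `compress_gradient` and `aggregate_elementwise` are hypothetical, not the paper's API.

```python
import numpy as np

def compress_gradient(grad: np.ndarray, keep_ratio: float):
    """Keep only the largest-magnitude entries (assumed top-k scheme);
    return the sparsified gradient and a mask of retained positions."""
    flat = grad.ravel()
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k magnitudes
    mask = np.zeros(flat.size, dtype=bool)
    mask[idx] = True
    sparse = np.where(mask, flat, 0.0).reshape(grad.shape)
    return sparse, mask.reshape(grad.shape)

def aggregate_elementwise(updates: np.ndarray, masks: np.ndarray):
    """Average each parameter only over the devices that reported it,
    so heavily compressed devices do not dilute untouched elements."""
    total = updates.sum(axis=0)
    counts = masks.sum(axis=0)
    return np.divide(total, counts, out=np.zeros_like(total),
                     where=counts > 0)

# Toy usage: three devices upload at different compression ratios.
rng = np.random.default_rng(0)
grads = [rng.standard_normal((4, 4)) for _ in range(3)]
pairs = [compress_gradient(g, r) for g, r in zip(grads, [1.0, 0.5, 0.25])]
updates = np.stack([p[0] for p in pairs])
masks = np.stack([p[1].astype(float) for p in pairs])
global_update = aggregate_elementwise(updates, masks)
```

The per-element count in `aggregate_elementwise` is one plausible reading of "element-wise" aggregation: each coordinate is averaged over the subset of devices whose compressed update actually covers it, rather than over all devices uniformly.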