Federated learning (FL) is a distributed learning paradigm that enables a large number of mobile devices to collaboratively learn a model under the coordination of a central server without sharing their raw data. Despite its practical efficiency and effectiveness, the iterative on-device learning process (e.g., local computations and global communications with the server) incurs a considerable cost in terms of learning time and energy consumption, which depends crucially on the number of selected clients and the number of local iterations in each training round. In this paper, we analyze how to design adaptive FL in mobile edge networks that optimally chooses these essential control variables to minimize the total cost while ensuring convergence. We establish the analytical relationship between the total cost and the control variables through the convergence upper bound. To efficiently solve the cost minimization problem, we develop a low-cost sampling-based algorithm to learn the convergence-related unknown parameters. We derive important solution properties that effectively identify the design principles for different optimization metrics. Practically, we evaluate our theoretical results both in a simulated environment and on a hardware prototype. Experimental evidence verifies our derived properties and demonstrates that our proposed solution achieves near-optimal performance for different optimization metrics across various datasets and under heterogeneous system and statistical settings.