Federated Learning is a new learning scheme for collaboratively training a shared prediction model while keeping data locally on participating devices. In this paper, we study a new model in which multiple federated learning services coexist at a multi-access edge computing server. Accordingly, both the sharing of CPU resources among learning services at each mobile device for the local training process and the allocation of communication resources among mobile devices for exchanging learning information must be considered. Furthermore, the convergence performance of each learning service depends on a hyper-learning rate parameter that needs to be precisely decided. Towards this end, we propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL, which accounts for the energy consumption of mobile devices and the overall learning time. We design a centralized algorithm based on the block coordinate descent method and a decentralized JP-miADMM algorithm for solving the MS-FEDL problem. Different from the centralized approach, the decentralized approach requires more iterations to obtain the solution, but it allows each learning service to independently manage its local resources and learning process without revealing its information to the others. Our simulation results demonstrate the convergence of the proposed algorithms and their superior performance compared to a heuristic strategy.