Federated Learning (FL) is an attractive distributed machine learning paradigm owing to its privacy-preserving characteristics. To balance the trade-off between energy consumption and execution latency, and thus accommodate different demands and application scenarios, we formulate an optimization problem that minimizes a weighted sum of total energy consumption and completion time via two weight parameters. The optimization variables are the bandwidth, transmission power, and CPU frequency of each device in the FL system, where all devices are connected to a base station and collaboratively train a global model. By decomposing the non-convex optimization problem into two subproblems, we devise a resource allocation algorithm that determines the bandwidth allocation, transmission power, and CPU frequency for each participating device. We further present the convergence analysis and computational complexity of the proposed algorithm. Numerical results show that the proposed algorithm performs well under different weight parameters (i.e., different demands) and outperforms state-of-the-art schemes.
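For concreteness, the weighted-sum objective described above takes the following general form (a sketch only; the weight symbols $\lambda_1, \lambda_2$ and the per-device variables $b_k$, $p_k$, $f_k$ for bandwidth, transmission power, and CPU frequency are our notation, not necessarily the paper's):
\[
\min_{\{b_k,\, p_k,\, f_k\}} \; \lambda_1\, E_{\mathrm{total}}\bigl(\{b_k, p_k, f_k\}\bigr) \;+\; \lambda_2\, T_{\mathrm{total}}\bigl(\{b_k, p_k, f_k\}\bigr),
\]
subject to per-device bandwidth, power, and frequency constraints, where $E_{\mathrm{total}}$ and $T_{\mathrm{total}}$ denote the total energy consumption and the completion time of the FL training process. Adjusting $\lambda_1$ and $\lambda_2$ trades energy against latency to match different application demands.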