Federated learning (FL) has emerged as a key technique for distributed machine learning (ML). Most of the literature on FL has focused on ML model training for (i) a single task/model, with (ii) a synchronous scheme for uplink/downlink transfer of model parameters, and (iii) a static data distribution across devices. These assumptions are often not representative of the conditions encountered in practical FL environments. To address this, we develop DMA-FL, which considers dynamic FL with multiple downstream tasks trained over an asynchronous model transmission architecture. We first characterize the convergence of ML model training under DMA-FL by introducing a family of scheduling tensors and rectangular functions that capture device scheduling. Our convergence analysis sheds light on the impact of resource allocation, device scheduling, and individual model states on ML model performance. We then formulate a non-convex mixed-integer optimization problem that jointly configures resource allocation and device scheduling to strike an efficient trade-off between energy consumption and ML performance. We develop a solution methodology based on successive convex approximations with guaranteed convergence to a stationary point. Through numerical simulations, we demonstrate the advantages of DMA-FL in terms of model performance and network resource savings.
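As a conceptual aside, the successive convex approximation (SCA) principle invoked above can be illustrated on a toy problem. The sketch below is not the paper's DMA-FL formulation; it is a minimal, assumed example in which a non-convex objective is written as a difference of convex functions and the concave part is linearized at each iterate, so that every subproblem is convex and the iterates settle at a stationary point.

```python
# Minimal SCA sketch on a toy difference-of-convex problem (illustrative only,
# NOT the DMA-FL resource-allocation problem from the paper).
# Objective: f(x) = x**4 - 2*x**2 = g(x) - h(x), with g(x) = x**4 (convex, kept
# exactly) and h(x) = 2*x**2 (convex, linearized at each iterate x_k).
import numpy as np
from scipy.optimize import minimize_scalar


def g(x):          # convex part, kept exactly in the surrogate
    return x**4


def h(x):          # convex part to be replaced by its linearization
    return 2.0 * x**2


def h_grad(x):
    return 4.0 * x


def sca(x0, iters=30, lo=-2.0, hi=2.0):
    """Iteratively minimize the convex surrogate g(x) - [h(x_k) + h'(x_k)(x - x_k)]."""
    x = x0
    for _ in range(iters):
        surrogate = lambda z, xk=x: g(z) - (h(xk) + h_grad(xk) * (z - xk))
        x = minimize_scalar(surrogate, bounds=(lo, hi), method="bounded").x
    return x


# The stationary points of f are x in {0, -1, 1}; SCA converges to one of them.
print(sca(x0=1.7))    # approaches x = 1
print(sca(x0=-0.3))   # approaches x = -1
```

In DMA-FL the subproblems additionally involve integer scheduling variables and network resource constraints, but the same convexify-then-solve iteration underlies the solution methodology described in the abstract.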