We propose cooperative edge-assisted dynamic federated learning (CE-FL). CE-FL introduces a distributed machine learning (ML) architecture in which data collection is carried out at the end devices, while model training is conducted cooperatively at the end devices and the edge servers, enabled via data offloading from the end devices to the edge servers through base stations. CE-FL also introduces a floating aggregation point, where the local models generated at the devices and the servers are aggregated at an edge server that varies from one model training round to the next to cope with network evolution in terms of data distribution and users' mobility. CE-FL considers the heterogeneity of network elements in terms of their communication/computation models and their proximity to one another. CE-FL further presumes a dynamic environment with online variation of the data at the network devices, which causes drift in the ML model performance. We model the processes carried out during CE-FL and conduct an analytical convergence analysis of its ML model training. We then formulate network-aware CE-FL, which aims to adaptively optimize all the network elements by tuning their contributions to the learning process, and which turns out to be a non-convex mixed-integer problem. Motivated by the large scale of the system, we propose a distributed optimization solver that breaks down the computation of the solution across the network elements. We finally demonstrate the effectiveness of our framework with data collected from a real-world testbed.
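To make the cooperative training and floating-aggregation steps concrete, the following is a minimal toy sketch in Python (NumPy). It assumes a synthetic linear-regression task, data-weighted model averaging, and a round-robin choice of the aggregating edge server; the names (local_update, pick_aggregator) and the rotation rule are illustrative placeholders, not the network-aware optimization procedure developed in the paper.

# Illustrative sketch only: each round, one edge server is chosen as the
# floating aggregation point and performs a data-weighted average of the
# local models trained at the devices and the edge servers (on offloaded data).
import numpy as np

rng = np.random.default_rng(0)
DIM = 5                          # toy model dimension
NUM_DEVICES, NUM_SERVERS = 6, 3

def make_data(n):
    # Synthetic local dataset for a linear-regression task with w* = 1.
    X = rng.normal(size=(n, DIM))
    y = X @ np.ones(DIM) + 0.1 * rng.normal(size=n)
    return X, y

device_data = [make_data(int(rng.integers(20, 60))) for _ in range(NUM_DEVICES)]
server_data = [make_data(int(rng.integers(50, 120))) for _ in range(NUM_SERVERS)]  # offloaded data

def local_update(w, X, y, lr=0.01, epochs=5):
    # A few steps of gradient descent on the local squared loss.
    w = w.copy()
    for _ in range(epochs):
        w -= lr * (X.T @ (X @ w - y)) / len(y)
    return w

def pick_aggregator(round_idx):
    # Floating aggregation point: the aggregating edge server changes every
    # round. Here we simply rotate; the paper selects it via optimization.
    return round_idx % NUM_SERVERS

w_global = np.zeros(DIM)
for t in range(20):
    agg = pick_aggregator(t)
    locals_ = [(local_update(w_global, X, y), len(y))
               for X, y in device_data + server_data]
    # Data-weighted model averaging performed at the chosen edge server.
    total = sum(n for _, n in locals_)
    w_global = sum(n * w for w, n in locals_) / total
    print(f"round {t:2d}: aggregator=server{agg}, "
          f"dist to w*={np.linalg.norm(w_global - np.ones(DIM)):.4f}")

Running the sketch, the distance to the ground-truth model shrinks over rounds even as the aggregation point rotates among servers, which is the behavior the floating-aggregation design is meant to preserve.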