Large-scale deployments of low Earth orbit (LEO) satellites collect massive amounts of Earth imagery and sensor data, which can empower machine learning (ML) to address global challenges such as real-time disaster navigation and mitigation. However, it is often infeasible to download all the high-resolution images and train these ML models on the ground because of limited downlink bandwidth, sparse connectivity, and regulatory constraints on imagery resolution. To address these challenges, we leverage Federated Learning (FL), where ground stations and satellites collaboratively train a global ML model without the satellites sharing their captured images. We identify fundamental challenges in applying existing FL algorithms between satellites and ground stations, and we formulate an optimization problem that captures the unique trade-off between model staleness and satellite idleness. We propose a novel FL framework, named FedSpace, which dynamically schedules model aggregation based on the deterministic, time-varying connectivity determined by satellite orbits. Extensive numerical evaluations based on real-world satellite images and satellite networks show that FedSpace reduces the training time by 1.7 days (38.6%) compared with state-of-the-art FL algorithms.
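To make the staleness/idleness trade-off concrete, the following is a minimal, hypothetical sketch (not the paper's actual FedSpace algorithm): given deterministic contact windows derived from satellite orbits, a ground-side scheduler decides after each upload whether to aggregate the buffered updates now (letting already-received updates go stale) or wait for more satellites (leaving those satellites' contributions idle). The `Contact` type, the weights `alpha`/`beta`, and the greedy rule are all illustrative assumptions.

```python
# Hypothetical sketch of aggregation scheduling under known contact windows.
# All names, weights, and the greedy rule are illustrative assumptions, not
# the FedSpace optimization from the paper.

from dataclasses import dataclass

@dataclass
class Contact:
    satellite: str
    time: float  # hour at which the satellite can upload its local update


def schedule_aggregations(contacts, num_satellites, alpha=1.0, beta=1.0):
    """Greedy sketch: after each upload, aggregate once the staleness cost of
    the buffered updates outweighs the idleness cost of waiting for the rest."""
    contacts = sorted(contacts, key=lambda c: c.time)
    buffered = {}            # satellite -> upload time of its pending update
    aggregation_times = []
    for c in contacts:
        buffered[c.satellite] = c.time
        # Staleness: total age of updates sitting in the buffer.
        staleness = sum(c.time - t for t in buffered.values())
        # Idleness: satellites whose contributions are still missing.
        idleness = num_satellites - len(buffered)
        if alpha * staleness >= beta * idleness:
            aggregation_times.append(c.time)
            buffered.clear()  # updates folded into the global model
    return aggregation_times


# Example: three satellites with deterministic contact times (hours).
windows = [Contact("sat-A", 1.0), Contact("sat-B", 2.5), Contact("sat-C", 7.0)]
print(schedule_aggregations(windows, num_satellites=3))  # -> [2.5]
```

In this toy run the scheduler aggregates once sat-A and sat-B have uploaded rather than idling until sat-C's much later pass, illustrating (under these assumed weights) why a fixed synchronous or fully asynchronous aggregation policy can be suboptimal for satellite constellations.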