Low Earth Orbit (LEO) satellite constellations have seen a surge in deployment over the past few years, owing to their ability to provide broadband Internet access and to collect vast amounts of Earth-observation data that can be used to develop AI on a global scale. Since the traditional machine learning (ML) approach of training a model by downloading satellite data to a ground station (GS) is impractical, Federated Learning (FL) offers a potential solution. However, existing FL approaches cannot be readily applied because of excessively long training times and unreliable satellite-GS communication links. In this paper, we propose FedHAP, which introduces high-altitude platforms (HAPs) as distributed parameter servers (PSs) into FL for Satcom (or more concretely, LEO constellations) to achieve fast and efficient model training. FedHAP consists of three components: 1) a layered communication topology, 2) a model propagation algorithm, and 3) a model aggregation algorithm. Our extensive simulations demonstrate that FedHAP significantly accelerates FL model convergence compared to state-of-the-art baselines, cutting the training time from several days down to a few hours while achieving higher accuracy.