Low Earth Orbit (LEO) satellite constellations have seen a sharp increase in deployment in recent years, owing to their distinctive capabilities of providing broadband Internet access and enabling global data acquisition as well as large-scale AI applications. To apply machine learning (ML) in such applications, the traditional approach of downloading satellite data such as imagery to a ground station (GS) and then training a model in a centralized manner is undesirable because of the limited bandwidth, the intermittent connectivity between satellites and the GS, and privacy concerns over transmitting raw data. Federated Learning (FL), an emerging communication and computing paradigm, offers a promising solution to this problem. However, we show that existing FL solutions do not fit well in such LEO constellation scenarios because of significant challenges such as excessive convergence delay and unreliable wireless channels. To this end, we propose to introduce high-altitude platforms (HAPs) as distributed parameter servers (PSs) and propose a synchronous FL algorithm, FedHAP, to accomplish model training efficiently via inter-satellite collaboration. To accelerate convergence, we also propose a layered communication scheme between satellites and HAPs that FedHAP leverages. Our simulations demonstrate that FedHAP attains model convergence in far fewer communication rounds than benchmarks, cutting the training time substantially from several days down to a few hours while achieving the same level of accuracy.
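As context for the synchronous FL setting described above, the aggregation step a parameter server performs in each communication round can be sketched as a FedAvg-style weighted average of the clients' (here, satellites') model parameters. This is a minimal illustrative sketch, not FedHAP itself; the function name and weighting scheme are assumptions for exposition.

```python
import numpy as np

def fed_avg(client_models, client_weights=None):
    """One synchronous aggregation round: weighted average of client parameters.

    client_models: list of models, each a list of np.ndarray parameter tensors.
    client_weights: optional per-client weights (e.g., local dataset sizes);
    uniform weighting is assumed when omitted. Names are illustrative.
    """
    if client_weights is None:
        client_weights = [1.0] * len(client_models)
    total = float(sum(client_weights))
    # Accumulate the weighted sum of each parameter tensor across clients.
    aggregated = [np.zeros_like(p) for p in client_models[0]]
    for model, w in zip(client_models, client_weights):
        for i, param in enumerate(model):
            aggregated[i] += (w / total) * param
    return aggregated

# Illustrative round: three "satellite" models, each a single 2x2 tensor.
models = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
global_model = fed_avg(models)
# global_model[0] holds the mean of the three tensors (all entries equal 2.0)
```

In a FedHAP-like deployment, each HAP would run this aggregation over the satellites it can reach, with the layered communication scheme determining which updates arrive in a given round.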