Low Earth Orbit (LEO) satellites play a crucial role in the development of 6G mobile networks and space-air-ground integrated systems. Recent advancements in space technology have empowered LEO satellites with the capability to run AI applications. However, centralized approaches, where ground stations (GSs) act as servers and satellites as clients, often suffer from slow convergence and inefficiencies due to intermittent connectivity between satellites and GSs. In contrast, decentralized federated learning (DFL) offers a promising alternative by enabling direct communication between satellites (clients) via inter-satellite links (ISLs). However, inter-plane ISLs connecting satellites on different orbital planes are dynamic due to Doppler shifts and pointing limitations, which can hinder model propagation and slow convergence. To mitigate these issues, we propose DFedSat, a fully decentralized federated learning framework tailored for LEO satellites. DFedSat accelerates training by employing two adaptive mechanisms for intra-plane and inter-plane model aggregation, respectively. Furthermore, a self-compensation mechanism is integrated to enhance the robustness of inter-plane ISLs against transmission failures. Additionally, we derive a sublinear convergence rate for DFedSat in the non-convex setting. Extensive experimental results demonstrate DFedSat's superiority over other DFL baselines in terms of convergence rate, communication efficiency, and resilience to unreliable links.
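The abstract does not specify the exact form of the aggregation and self-compensation mechanisms, so the following is only a minimal illustrative sketch of one decentralized aggregation step on a single satellite, assuming uniform mixing of neighbor models and a cache-based fallback (reusing the last successfully received inter-plane model) as a stand-in for self-compensation; all function and variable names are hypothetical.

```python
import numpy as np

def dfl_aggregate(own_model, intra_models, inter_models, cached_inter):
    """Illustrative decentralized aggregation step on one satellite.

    own_model    : np.ndarray, local model after local SGD steps
    intra_models : list of np.ndarray received over intra-plane ISLs
    inter_models : dict {plane_id: np.ndarray or None}; None marks a
                   failed inter-plane transmission
    cached_inter : dict {plane_id: np.ndarray}, last successfully
                   received model per plane (compensation cache)
    """
    received = list(intra_models)
    for plane_id, model in inter_models.items():
        if model is not None:
            cached_inter[plane_id] = model            # refresh cache on success
            received.append(model)
        elif plane_id in cached_inter:
            received.append(cached_inter[plane_id])   # compensate with stale copy
    # Uniform mixing over the local model and all (possibly compensated) neighbors
    new_model = np.mean([own_model] + received, axis=0)
    return new_model, cached_inter
```

In this sketch, a failed inter-plane link simply falls back to the most recent cached model from that plane rather than dropping the contribution entirely; the actual DFedSat mechanisms and mixing weights may differ.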