This paper investigates the integration of Open Radio Access Networks (O-RAN) into non-terrestrial networks (NTN), focusing on optimizing the dynamic functional split between Centralized Units (CU) and Distributed Units (DU) to enhance network energy efficiency. We introduce a novel framework that uses a Deep Q-Network (DQN)-based reinforcement learning approach to dynamically select the optimal RAN functional split option and the best NTN-based RAN among the available NTN platforms, according to real-time conditions, traffic demands, and the limited energy resources of NTN platforms. The approach adapts to NTN-based RANs deployed on different platforms, such as low Earth orbit (LEO) satellites and high-altitude platform stations (HAPS), enabling adaptive network reconfiguration that maintains service quality while optimizing energy utilization. Simulation results validate the effectiveness of our method, showing significant improvements in energy efficiency and sustainability under diverse NTN scenarios.
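As a rough illustration of the decision loop summarized above, the following minimal sketch (in PyTorch) shows how a DQN agent could map an observed network state, e.g., traffic demand, link quality, and remaining platform energy, to a joint choice of functional split option and NTN platform. The state features, action encoding, reward, and dimensions are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a DQN-based split/platform selector; all names,
# dimensions, and the reward signal are assumptions for illustration only.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

NUM_SPLITS = 4       # assumed number of CU/DU functional split options
NUM_PLATFORMS = 3    # assumed NTN platforms (e.g., two LEO satellites, one HAPS)
NUM_ACTIONS = NUM_SPLITS * NUM_PLATFORMS  # joint split/platform decision
STATE_DIM = 6        # e.g., traffic demand, link quality, per-platform energy


class QNetwork(nn.Module):
    """Small MLP mapping the observed network state to Q-values per joint action."""

    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, NUM_ACTIONS),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def select_action(q_net: QNetwork, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice over the joint (split option, platform) action space."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())


def decode_action(action: int) -> tuple:
    """Map a joint action index back to (split option, platform) indices."""
    return action // NUM_PLATFORMS, action % NUM_PLATFORMS


def train_step(q_net, target_net, optimizer, batch, gamma=0.99):
    """One DQN update: regress Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    states, actions, rewards, next_states = (torch.stack(x) for x in zip(*batch))
    q_values = q_net(states).gather(1, actions.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        targets = rewards + gamma * target_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    q_net, target_net = QNetwork(), QNetwork()
    target_net.load_state_dict(q_net.state_dict())
    optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)

    # Toy interaction loop; a random "environment" stands in for the NTN simulator.
    state = torch.rand(STATE_DIM)
    for step in range(200):
        action = select_action(q_net, state, epsilon=0.1)
        split, platform = decode_action(action)
        # Placeholder reward: in practice this would reflect measured energy
        # efficiency and service quality of the chosen split/platform pair.
        reward = torch.rand(())
        next_state = torch.rand(STATE_DIM)
        replay.append((state, torch.tensor(action), reward, next_state))
        state = next_state
        if len(replay) >= 32:
            train_step(q_net, target_net, optimizer, random.sample(replay, 32))
        if step % 50 == 0:
            target_net.load_state_dict(q_net.state_dict())
```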