The move toward artificial intelligence (AI)-native sixth-generation (6G) networks has heightened the importance of explainability and trustworthiness in network management operations, especially for mission-critical use cases. Such trust goes beyond traditional post-hoc explainable AI (XAI) methods, instead using contextual explanations to guide the learning process in an in-hoc way. This paper proposes TANGO, a novel graph reinforcement learning (GRL) framework that relies on a symbolic subsystem. The subsystem consists of a Bayesian graph neural network (GNN) Explainer, whose outputs, in terms of edge/node importance and uncertainty, are periodically translated into a logical GRL reward function. This translation is performed by symbolic reasoning rules defined within a Reasoner. On a real-world testbed proof-of-concept (PoC), a gNodeB (gNB) radio resource allocation problem is formulated, which aims to minimize under- and over-provisioning of physical resource blocks (PRBs) while penalizing decisions that emanate from uncertain and less important edge-node relations. Our findings reveal that the proposed in-hoc explainability solution significantly expedites convergence compared to a standard GRL baseline and other benchmarks in the deep reinforcement learning (DRL) domain. The experiments evaluate performance in terms of AI, complexity, energy consumption, robustness, network, scalability, and explainability metrics. Specifically, the results show that TANGO achieves a noteworthy 96.39% accuracy in optimal PRB allocation during the inference phase, outperforming the baseline by 1.22x.
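To make the in-hoc reward-shaping idea concrete, below is a minimal sketch of how Explainer outputs might be folded into the GRL reward. This is not the paper's actual implementation: the function name `shaped_reward`, the 0.3/0.7 thresholds, the `penalty_weight` default, and the Boolean "untrustworthy edge" rule are all illustrative assumptions standing in for the symbolic rules the Reasoner applies.

```python
import numpy as np

def shaped_reward(base_reward: float,
                  edge_importance: np.ndarray,   # Explainer importance scores in [0, 1]
                  edge_uncertainty: np.ndarray,  # Bayesian uncertainty estimates in [0, 1]
                  decision_edges: np.ndarray,    # Boolean mask of edges the agent's action used
                  penalty_weight: float = 0.5) -> float:
    """Penalize PRB-allocation decisions that lean on edges the Bayesian GNN
    Explainer deems uncertain or unimportant (hypothetical rule, for illustration)."""
    # Assumed symbolic-style rule: an edge is untrustworthy if it is both
    # low-importance AND high-uncertainty (thresholds are placeholders).
    untrustworthy = (edge_importance < 0.3) & (edge_uncertainty > 0.7)
    # Fraction of the decision that relied on untrustworthy edge-node relations.
    penalty = (untrustworthy & decision_edges).sum() / max(decision_edges.sum(), 1)
    return base_reward - penalty_weight * penalty
```

In this sketch the penalty term shrinks the reward in proportion to how much of the agent's decision rested on relations the Explainer flags, which is one plausible way a periodic Explainer-to-reward translation could steer learning toward trustworthy edges.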