This paper proposes a fully scalable multi-agent reinforcement learning (MARL) approach for packet scheduling in conflict graphs, aiming to minimize average packet delay. Each agent autonomously manages the schedule of a single link over one or multiple sub-bands, considering its own state and the states of conflicting links. The problem is formulated as a decentralized partially observable Markov decision process (Dec-POMDP). The proposed solution leverages an on-policy reinforcement learning algorithm, multi-agent proximal policy optimization (MAPPO), within a multi-agent networked system, incorporating advanced recurrent structures in the neural network. The MARL design allows for fully decentralized training and execution, scaling seamlessly to very large networks. Extensive simulations across a diverse range of conflict graphs demonstrate that the proposed solution compares favorably to well-established schedulers in terms of both throughput and delay under various traffic conditions.
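To make the agent design concrete, below is a minimal illustrative sketch (not the authors' code) of a per-link recurrent actor of the kind MAPPO would train in this setting: each agent observes its own link state together with the states of its conflicting links, carries a recurrent hidden state across scheduling slots, and emits a transmit/idle decision per sub-band. All names, dimensions, and the GRU-based architecture here are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class RecurrentLinkActor(nn.Module):
    """Hypothetical per-link agent: recurrent actor over scheduling slots."""

    def __init__(self, obs_dim: int, num_subbands: int, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)    # recurrence over time slots
        self.head = nn.Linear(hidden_dim, num_subbands)  # one transmit logit per sub-band

    def forward(self, obs: torch.Tensor, h: torch.Tensor):
        x = torch.relu(self.encoder(obs))
        h = self.gru(x, h)
        logits = self.head(h)
        # Independent Bernoulli per sub-band: 1 = transmit, 0 = stay idle.
        dist = torch.distributions.Bernoulli(logits=logits)
        action = dist.sample()
        return action, dist.log_prob(action).sum(-1), h

# Example: one agent whose observation stacks its own queue length with the
# queue lengths of 3 conflicting links, scheduling over 2 sub-bands.
actor = RecurrentLinkActor(obs_dim=4, num_subbands=2)
h = torch.zeros(1, 64)
obs = torch.tensor([[5.0, 2.0, 0.0, 7.0]])  # own queue + neighbor link queues
action, logp, h = actor(obs, h)
```

Because each actor conditions only on locally observable quantities (its own state plus those of its conflict-graph neighbors) and keeps its own recurrent state, adding more links adds more independent agents rather than enlarging any single network, which is what permits the fully decentralized, scalable training and execution claimed above.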