Congestion Control (CC), a core networking task for efficiently utilizing network capacity, has received great attention and is widely used in various Internet communication applications such as 5G, the Internet of Things, and UAN. Various CC algorithms have been proposed at both the network and transport layers, such as Active Queue Management (AQM) algorithms and the Transmission Control Protocol (TCP) congestion control mechanism. However, it is hard to model the dynamic AQM/TCP system and to make the two algorithms cooperate to achieve excellent performance under different communication scenarios. In this paper, we explore the performance of multi-agent reinforcement learning-based cross-layer congestion control and present the cooperation performance of two agents, a scheme we call MACC (Multi-agent Congestion Control). We implement MACC in NS3. The simulation results show that our scheme outperforms other congestion control combinations in terms of throughput, delay, and other metrics. This not only proves that networking protocols based on multi-agent deep reinforcement learning are efficient for communication management, but also verifies that the networking area can serve as a new playground for machine learning algorithms.
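The cross-layer cooperation described above can be illustrated with a minimal toy sketch: one agent adjusts the sender's congestion window (transport layer) while a second agent adjusts an AQM drop probability (network layer), and both learn from a shared reward that trades off throughput against queueing delay and loss. This is an illustrative simplification under assumed parameters (tabular per-state learners, a toy single-bottleneck queue model), not the paper's actual DRL architecture or its NS3 implementation.

```python
import random

random.seed(0)

class ToyLink:
    """Toy bottleneck link: fixed capacity (pkts/step) and a finite FIFO buffer."""
    def __init__(self, capacity=10, buffer=50):
        self.capacity, self.buffer, self.queue = capacity, buffer, 0

    def step(self, arrivals, drop_prob):
        # AQM-style probabilistic drop on arrival, then tail drop on overflow.
        accepted = sum(random.random() >= drop_prob for _ in range(arrivals))
        overflow = max(0, self.queue + accepted - self.buffer)
        self.queue = min(self.queue + accepted, self.buffer)
        sent = min(self.queue, self.capacity)
        self.queue -= sent
        loss = (arrivals - accepted) + overflow
        return sent, loss

class BanditAgent:
    """Per-state epsilon-greedy action-value learner (a stand-in for a DRL policy)."""
    def __init__(self, actions, eps=0.1, alpha=0.3):
        self.actions, self.eps, self.alpha = actions, eps, alpha
        self.q = {}  # (state, action) -> estimated value

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward):
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward - old)

def run(steps=500):
    link = ToyLink()
    tcp = BanditAgent(actions=[-2, 0, 2])        # cwnd adjustments (transport layer)
    aqm = BanditAgent(actions=[0.0, 0.05, 0.1])  # drop probabilities (network layer)
    cwnd, total_sent = 5, 0
    for _ in range(steps):
        state = link.queue // 10                 # coarse queue-occupancy state
        da, dp = tcp.act(state), aqm.act(state)
        cwnd = max(1, cwnd + da)
        sent, loss = link.step(cwnd, dp)
        total_sent += sent
        # Shared reward: favor throughput, penalize queueing delay and loss,
        # so the two layers are trained toward a common objective.
        reward = sent - 0.1 * link.queue - loss
        tcp.learn(state, da, reward)
        aqm.learn(state, dp, reward)
    return total_sent, cwnd, link

total_sent, cwnd, link = run()
```

The key design point mirrored here is the shared reward: because both agents are credited with the same cross-layer signal, neither can improve its own metric (e.g., injecting more packets, or dropping everything) at the other's expense.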