Effective network congestion control strategies are key to keeping the Internet (or any large computer network) operational. Network congestion control has been dominated by hand-crafted heuristics for decades. Recently, Reinforcement Learning (RL) has emerged as an alternative to automatically optimize such control strategies. Research so far has primarily considered RL interfaces which block the sender while an agent considers its next action. This is largely an artifact of building on top of frameworks designed for RL in games (e.g. OpenAI Gym). However, this does not translate to real-world networking environments, where a network sender waiting on a policy without sending data leads to under-utilization of bandwidth. We instead propose to formulate congestion control with an asynchronous RL agent that handles delayed actions. We present MVFST-RL, a scalable framework for congestion control in the QUIC transport protocol that leverages the state of the art in asynchronous RL training with off-policy correction. We analyze modeling improvements to mitigate the deviation from Markovian dynamics, and evaluate our method on emulated networks from the Pantheon benchmark platform. The source code is publicly available at https://github.com/facebookresearch/mvfst-rl.
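To illustrate the non-blocking interface the abstract describes, the following is a minimal sketch (not the MVFST-RL implementation or its API) of a sender that keeps transmitting with its current congestion window while a policy computes the next action asynchronously, applying the action whenever it arrives. All function and variable names here (`transmit`, `policy`, `sender_loop`, etc.) are illustrative assumptions.

```python
import queue
import random
import threading
import time


def transmit(cwnd):
    """Toy network step: returns (acked, lost) packets for one send round."""
    lost = sum(random.random() < 0.05 for _ in range(cwnd))
    return cwnd - lost, lost


def policy(state):
    """Placeholder policy: grow cwnd when loss-free, otherwise back off.
    In an RL setting this would be a learned policy evaluated off the send path."""
    time.sleep(0.01)  # simulate inference latency, causing delayed actions
    return +1 if state["lost"] == 0 else -max(1, state["cwnd"] // 2)


def agent_loop(state_q, action_q):
    """Agent thread: consumes observed states, produces (possibly delayed) actions."""
    while True:
        state = state_q.get()
        if state is None:
            return
        action_q.put(policy(state))


def sender_loop(num_steps=50):
    state_q, action_q = queue.Queue(), queue.Queue()
    threading.Thread(target=agent_loop, args=(state_q, action_q), daemon=True).start()

    cwnd = 10
    for _ in range(num_steps):
        acked, lost = transmit(cwnd)  # keep sending; never block on the policy
        state_q.put({"cwnd": cwnd, "acked": acked, "lost": lost})
        try:
            cwnd = max(1, cwnd + action_q.get_nowait())  # apply an action if ready
        except queue.Empty:
            pass  # no action yet; continue with the current cwnd
    state_q.put(None)  # signal the agent thread to exit


if __name__ == "__main__":
    sender_loop()
```

Because actions are applied with a delay rather than synchronously after every state, the environment seen by the learner deviates from strictly Markovian dynamics, which is the issue the modeling improvements mentioned above aim to mitigate.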