Cloud datacenters are growing exponentially in both number and size. This growth produces a surge in network activity that warrants better congestion avoidance. The resulting challenge is two-fold: (i) designing algorithms that can be custom-tuned to the complex traffic patterns of a given datacenter; while, at the same time, (ii) running on low-level hardware with the low latency required for effective Congestion Control (CC). In this work, we present a Reinforcement Learning (RL) based CC solution that learns from certain traffic scenarios and successfully generalizes to others. We then distill the RL neural network policy into binary decision trees to achieve the $\mu$sec decision latency required for real-time inference with RDMA. We deploy the distilled policy on NVIDIA NICs in a real network and demonstrate state-of-the-art performance, balancing all tested metrics simultaneously: bandwidth, latency, fairness, and packet drops.
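To illustrate the distillation step, below is a minimal sketch of imitating a trained RL policy with a shallow binary decision tree. The policy interface, feature set (RTT, CNP rate, transmit rate), discrete action set, and tree depth are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: distill a neural-network CC policy into a binary decision tree.
# All names and features here are hypothetical stand-ins for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def distill_policy(rl_policy, states):
    """Fit a shallow decision tree to imitate the RL policy.

    rl_policy: callable mapping a batch of CC states to discrete
               rate-adjustment actions (the "teacher" labels).
    states:    array of states collected from traffic scenarios.
    """
    actions = rl_policy(states)                  # query the teacher policy
    tree = DecisionTreeClassifier(max_depth=10)  # bounded depth bounds worst-case decision latency
    tree.fit(states, actions)
    return tree

# Toy stand-in policy: threshold on normalized RTT
# (0 = increase rate, 1 = decrease rate).
toy_policy = lambda s: (s[:, 0] > 0.5).astype(int)
states = np.random.rand(10_000, 3)               # columns: [rtt, cnp_rate, tx_rate]
tree = distill_policy(toy_policy, states)
print(tree.predict(states[:5]))
```

A fixed-depth tree of this kind evaluates in a small, bounded number of comparisons, which is what makes $\mu$sec-scale inference on NIC hardware plausible where a full neural-network forward pass would not be.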