As communication protocols evolve, datacenter network utilization increases. As a result, congestion becomes more frequent, causing higher latency and packet loss. Combined with the increasing complexity of workloads, this makes manual design of congestion control (CC) algorithms extremely difficult, and calls for AI approaches to replace the human effort. Unfortunately, it is currently not possible to deploy AI models on network devices due to their limited computational capabilities. Here, we offer a solution to this problem by building a computationally light system based on a recent reinforcement learning CC algorithm, RL-CC [arXiv:2207.02295]. We reduce the inference time of RL-CC by 500× by distilling its complex neural network into decision trees. This transformation enables real-time inference within the $\mu$-sec decision-time requirement, with a negligible effect on quality. We deploy the transformed policy on NVIDIA NICs in a live cluster. Compared with popular CC algorithms used in production, RL-CC is the only method that performs well on all benchmarks tested, across a wide range of flow counts, balancing multiple metrics simultaneously: bandwidth, latency, and packet drops. These results suggest that data-driven methods for CC are feasible, challenging the prior belief that handcrafted heuristics are necessary to achieve optimal performance.