With the emergence of new technologies, computer networks are becoming structurally more complex, diverse, and heterogeneous. The growing discrepancy among interconnected networks in data rates, delays, packet loss, and transmission scenarios significantly influences the dynamics of congestion control (CC) parametrization. In contrast to traditional end-to-end CC algorithms that rely on fixed rules, newer approaches employ machine learning to continuously adapt CC to real-time network conditions. However, due to their high computational complexity and memory consumption, the feasibility of these schemes may still be questioned. This paper surveys selected machine-learning-based approaches to CC and proposes a roadmap for their implementation in computer systems using dataflow computing and Gallium Arsenide (GaAs) chips.