Generalizing machine learning (ML) models for network traffic dynamics tends to be considered a lost cause. Hence, for every new task, we often resort to designing new models and training them on model-specific datasets collected, whenever possible, in an environment mimicking the model's deployment. This approach essentially gives up on generalization. Yet, an ML architecture called _Transformer_ has enabled previously unimaginable generalization in other domains. Nowadays, one can download a model pre-trained on massive datasets and only fine-tune it for a specific task and context with comparatively little time and data. These fine-tuned models are now state-of-the-art for many benchmarks. We believe this progress could translate to networking and propose a Network Traffic Transformer (NTT), a transformer adapted to learn network dynamics from packet traces. Our initial results are promising: NTT seems able to generalize to new prediction tasks and contexts. This study suggests there is still hope for generalization, though it calls for substantial future research.
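The pre-train-then-fine-tune workflow the abstract alludes to can be illustrated with a deliberately tiny sketch: fit a one-parameter model on a large "generic" dataset, then adapt it to a small task-specific dataset, starting either from the pre-trained weight or from scratch. All names and data here are hypothetical illustrations, not part of NTT; a real pipeline would pre-train a transformer on packet traces.

```python
# Hypothetical sketch of pre-training followed by fine-tuning.
# The model, data, and hyperparameters are illustrative only.
import random

random.seed(0)

def fit_scale(xs, ys, w=0.0, lr=0.1, steps=200):
    """Fit y ~ w * x by full-batch gradient descent; returns the learned weight."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

# "Pre-training": a large generic dataset whose underlying dynamics are y = 3x.
big_xs = [random.uniform(-1, 1) for _ in range(1000)]
big_ys = [3 * x for x in big_xs]
w_pretrained = fit_scale(big_xs, big_ys)

# "Fine-tuning": only three samples from a slightly shifted context, y = 3.5x.
small_xs = [0.2, -0.4, 0.9]
small_ys = [3.5 * x for x in small_xs]
w_finetuned = fit_scale(small_xs, small_ys, w=w_pretrained, steps=50)
w_scratch = fit_scale(small_xs, small_ys, w=0.0, steps=50)

# With the same tiny dataset and budget, the pre-trained start lands closer.
print(abs(w_finetuned - 3.5) < abs(w_scratch - 3.5))  # → True
```

The point of the sketch is the data economics, not the model: fine-tuning reuses what pre-training already learned, so few task-specific samples and few update steps suffice, which is exactly the advantage the abstract hopes to carry over to networking.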