Machine learning (ML) is a powerful tool to model the complexity of communication networks. As networks evolve, we cannot simply train a model once and deploy it indefinitely; retraining models, known as continual learning, is necessary. Yet, to date, there is no established methodology to answer the key questions: with which samples should we retrain, and when should we retrain? We address these questions with the sample-selection system Memento, which maintains a training set of the "most useful" samples to maximize sample-space coverage. Memento particularly benefits rare patterns (the notoriously long "tail" in networking) and allows a rational assessment of when retraining may help, i.e., when coverage changes. We deployed Memento on Puffer, the live-TV streaming project, and achieved a 14% reduction in stall time, 3.5x the improvement obtained with random sample selection. Finally, Memento does not depend on a specific model architecture and is likely to yield benefits in other ML-based networking applications.
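To make the coverage idea concrete, here is a minimal sketch, not the authors' implementation: a bounded sample memory that spreads retained samples across coarse feature-space bins, evicts from the densest bin so rare ("tail") regions survive, and flags retraining only when the set of occupied bins changes noticeably. All names (CoverageMemory, bin_edges, should_retrain, the grid-based coverage proxy) are illustrative assumptions, not Memento's actual algorithm.

```python
import numpy as np


class CoverageMemory:
    """Toy coverage-driven sample memory (illustrative sketch only)."""

    def __init__(self, capacity, bin_edges):
        self.capacity = capacity      # max number of stored samples
        self.bin_edges = bin_edges    # per-feature bin boundaries
        self.samples = []             # list of (features, label)

    def _bin(self, x):
        # Map a feature vector to a coarse grid cell (its "coverage" bin).
        return tuple(int(np.digitize(v, e)) for v, e in zip(x, self.bin_edges))

    def _bin_counts(self):
        counts = {}
        for x, _ in self.samples:
            b = self._bin(x)
            counts[b] = counts.get(b, 0) + 1
        return counts

    def add(self, x, y):
        # Admit the new sample; if over capacity, evict from the densest bin
        # so that sparsely covered (rare) regions are preserved.
        self.samples.append((np.asarray(x, dtype=float), y))
        if len(self.samples) > self.capacity:
            counts = self._bin_counts()
            densest = max(counts, key=counts.get)
            for i, (sx, _) in enumerate(self.samples):
                if self._bin(sx) == densest:
                    del self.samples[i]
                    break

    def coverage(self):
        # Set of occupied bins: a crude proxy for sample-space coverage.
        return set(self._bin_counts())

    def should_retrain(self, previous_coverage, threshold=0.1):
        # Suggest retraining only when coverage changed enough to matter.
        current = self.coverage()
        changed = len(current ^ previous_coverage)
        return changed / max(len(current | previous_coverage), 1) > threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    edges = [np.linspace(0, 1, 11)] * 2           # 10x10 grid over 2 features
    mem = CoverageMemory(capacity=200, bin_edges=edges)

    for _ in range(500):                          # initial traffic pattern
        mem.add(rng.uniform(0.0, 0.5, size=2), y=0)
    baseline = mem.coverage()

    for _ in range(100):                          # distribution shift ("tail")
        mem.add(rng.uniform(0.5, 1.0, size=2), y=1)
    print("retrain?", mem.should_retrain(baseline))
```

The sketch illustrates the two claims of the abstract: eviction pressure falls on dense regions, so rare patterns are kept, and the retraining decision is tied to an observable quantity (a change in coverage) rather than a fixed schedule.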