With the emergence of new application areas such as cyber-physical systems and human-in-the-loop applications, ensuring a specific level of end-to-end network latency with high reliability (e.g., 99.9%) is becoming increasingly critical. Aligning wireless links with these reliability requirements calls for analyzing and controlling network latency in terms of its full probability distribution. In a wireless link, however, this distribution may vary over time, which makes the task particularly challenging. We propose predicting the latency distribution using state-of-the-art data-driven techniques that leverage historical network information. Our approach tokenizes network-state information and processes it with temporal deep-learning architectures, namely LSTM and Transformer models, to capture both short- and long-term delay dependencies. These models output the parameters of a chosen parametric density via a mixture density network with Gaussian mixtures, yielding multi-step probabilistic forecasts of future delays. To validate the proposed approach, we implemented and tested these methods on a time-synchronized, SDR-based OpenAirInterface 5G testbed used to collect and preprocess network-delay data. Our experiments show that the Transformer model achieves lower negative log-likelihood and mean absolute error than both LSTM and feed-forward baselines in challenging scenarios, and we additionally report on model complexity and training/inference overhead. This framework enables more informed decision-making for adaptive scheduling and resource allocation, paving the way toward enhanced QoS in evolving 5G and 6G networks.
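To make the described pipeline concrete, the sketch below shows one plausible realization (not the authors' code) of a temporal encoder followed by a mixture-density-network head that outputs Gaussian-mixture parameters for a multi-step delay forecast and is trained with the negative log-likelihood. All names, layer sizes, the number of mixture components, and the forecast horizon are illustrative assumptions.

```python
# Hypothetical sketch of the abstract's idea: Transformer encoder over tokenized
# network-state history + MDN head emitting Gaussian-mixture parameters per
# future delay step, trained with the mixture negative log-likelihood (NLL).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class DelayMDNForecaster(nn.Module):
    def __init__(self, n_features=8, d_model=64, n_heads=4, n_layers=2,
                 horizon=5, n_components=3):
        super().__init__()
        self.horizon = horizon
        self.n_components = n_components
        # Project tokenized network-state features into the model dimension.
        self.embed = nn.Linear(n_features, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        # MDN head: mixture weight, mean, and log-std per component and step.
        self.head = nn.Linear(d_model, horizon * n_components * 3)

    def forward(self, x):
        # x: (batch, seq_len, n_features) history of network-state tokens.
        h = self.encoder(self.embed(x))[:, -1]            # last-step summary
        p = self.head(h).view(-1, self.horizon, self.n_components, 3)
        log_pi = F.log_softmax(p[..., 0], dim=-1)         # mixture weights
        mu = p[..., 1]                                     # component means
        log_sigma = p[..., 2].clamp(-5.0, 5.0)             # component log-stds
        return log_pi, mu, log_sigma

def gaussian_mixture_nll(log_pi, mu, log_sigma, y):
    # y: (batch, horizon) observed future delays.
    y = y.unsqueeze(-1)                                    # broadcast over components
    log_prob = (-0.5 * ((y - mu) / log_sigma.exp()) ** 2
                - log_sigma - 0.5 * math.log(2 * math.pi))
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Usage with random data of the assumed shapes: one forward/backward pass.
model = DelayMDNForecaster()
x = torch.randn(32, 20, 8)    # 20 past steps of 8 network-state features
y = torch.rand(32, 5)         # next 5 delay observations
loss = gaussian_mixture_nll(*model(x), y)
loss.backward()
```

An LSTM-based variant of the same idea would simply replace the Transformer encoder with an `nn.LSTM` and take its final hidden state as the summary fed to the MDN head; the loss and output parameterization stay unchanged.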