In recent years, autonomous networks have been designed with Predictive Quality of Service (PQoS) in mind, as a means for applications operating in the industrial and/or automotive sectors to anticipate Quality of Service (QoS) changes and react accordingly. In this context, Reinforcement Learning (RL) has emerged as a promising approach for performing accurate predictions and optimizing the efficiency and adaptability of wireless networks. Along these lines, in this paper we propose the design of a new entity, implemented at the RAN level, that implements PQoS functionalities with the support of an RL framework. Specifically, we focus on the design of the reward function of the learning agent, which converts QoS estimates into appropriate countermeasures whenever QoS requirements are not satisfied. We demonstrate via ns-3 simulations that, compared to other baseline solutions, our approach achieves the best trade-off between QoS and Quality of Experience (QoE) performance for end users in a teleoperated-driving-like scenario.