The interconnection of vehicles in the future fifth-generation (5G) wireless ecosystem forms the so-called Internet of Vehicles (IoV). The IoV enables new classes of applications that require delay-sensitive, compute-intensive, and bandwidth-hungry services. Mobile edge computing (MEC) and network slicing (NS) are two key enabling technologies in 5G networks that can be used to optimize the allocation of network resources and guarantee the diverse requirements of IoV applications. Because traditional model-based optimization techniques generally lead to NP-hard, strongly non-convex, and non-linear mathematical programming formulations, in this paper we introduce a model-free approach based on deep reinforcement learning (DRL) to solve the resource allocation problem in a network-slicing-based, MEC-enabled IoV network. Furthermore, the solution uses non-orthogonal multiple access (NOMA) to better exploit the scarce channel resources. The considered problem jointly addresses channel and power allocation, slice selection, and vehicle selection (vehicle grouping). We model the problem as a single-agent Markov decision process and solve it with the well-known deep Q-learning (DQL) algorithm. We show that our approach is robust and effective under different network conditions compared to benchmark solutions.
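To make the single-agent formulation concrete, the following is a minimal, purely illustrative tabular Q-learning sketch of such an MDP. All quantities here (the state and action spaces, the reward table, the toy channel dynamics) are hypothetical stand-ins, not the paper's model: the actual agent jointly selects slices, channels, and power levels, and uses a deep neural network as the Q-function approximator rather than a lookup table.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 4    # hypothetical: coarse channel-quality levels observed per step
N_ACTIONS = 6   # hypothetical: joint (slice, channel/power) choices
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

# Hypothetical stand-in reward, e.g. achieved utility of a slice/channel choice.
reward_table = rng.uniform(0.0, 1.0, size=(N_STATES, N_ACTIONS))

Q = np.zeros((N_STATES, N_ACTIONS))
state = 0
for step in range(5000):
    # epsilon-greedy action selection
    if rng.random() < EPS:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[state]))
    reward = reward_table[state, action]
    next_state = int(rng.integers(N_STATES))  # toy random channel dynamics
    # standard Q-learning update; DQL replaces this table with a neural network
    Q[state, action] += ALPHA * (
        reward + GAMMA * Q[next_state].max() - Q[state, action]
    )
    state = next_state

# Greedy resource-allocation policy: one action per observed channel state.
policy = Q.argmax(axis=1)
```

In the paper's setting the state would instead encode the vehicles' channel conditions and queue/demand information, and the deep network generalizes across the combinatorially large joint action space that makes a table infeasible.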