Recently, the concept of the open radio access network (O-RAN) has been proposed, which aims to bring intelligence and openness to next-generation radio access networks (RANs). It provides standardized interfaces and the ability to host network applications from third-party vendors through extended applications (xApps), enabling greater flexibility in network management. However, this may lead to conflicts between network function implementations, especially when these functions are provided by different vendors. In this paper, we aim to mitigate conflicts between xApps running on the near-real-time (near-RT) RAN Intelligent Controller (RIC) of O-RAN. In particular, we propose a team learning algorithm that improves network performance by increasing cooperation between xApps. We compare the team learning approach with independent deep Q-learning, in which network functions optimize resources individually. Our simulations show that team learning yields better network performance under various user mobility and traffic loads: with a 6 Mbps traffic load and a 20 m/s user movement speed, team learning achieves 8% higher throughput and 64.8% lower PDR.