Service Function Chaining (SFC) in wireless networks has become popular in many domains, such as networking and multimedia. It relies on allocating network resources to incoming SFC requests via a Virtual Network Embedding (VNE) algorithm so as to optimize SFC performance. When the load of incoming requests, all competing for the limited network resources, increases, it becomes challenging to decide which requests should be admitted and which should be rejected. In this work, we propose a deep reinforcement learning (RL) solution that learns an admission policy conditioned on request characteristics such as the service lifetime and the priority of incoming requests. We compare the deep RL solution to a first-come-first-serve baseline that admits a request whenever resources are available. We show that deep RL outperforms the baseline, achieving a higher acceptance rate while keeping rejections low, even when it rejects requests that could still fit in the available resources.
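For illustration, the sketch below implements the first-come-first-serve baseline in the form described above, under some simplifying assumptions that are not taken from the paper: a single CPU-like capacity pool, one request arrival per timestep, and hypothetical request attributes (cpu_demand, lifetime, priority).

```python
# A minimal sketch of the FCFS admission baseline: admit any request
# that fits, regardless of priority or lifetime. The request attributes
# and single capacity pool are illustrative assumptions, not the
# paper's exact system model.
from dataclasses import dataclass
import random


@dataclass
class SFCRequest:
    cpu_demand: int   # resources the embedded chain would consume
    lifetime: int     # timesteps until those resources are released
    priority: int     # higher means a more valuable request


class FCFSAdmission:
    """Baseline controller: admit whenever free capacity suffices."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.active = []  # [remaining_lifetime, cpu] per admitted chain

    def step(self, req: SFCRequest) -> bool:
        # Age active services and release the resources of expired ones.
        self.active = [[t - 1, c] for t, c in self.active if t > 1]
        used = sum(c for _, c in self.active)
        if used + req.cpu_demand <= self.capacity:
            self.active.append([req.lifetime, req.cpu_demand])
            return True   # admit: the request fits
        return False      # reject: not enough free resources


if __name__ == "__main__":
    random.seed(0)
    ctrl = FCFSAdmission(capacity=10)
    admitted = 0
    for _ in range(100):
        req = SFCRequest(cpu_demand=random.randint(1, 4),
                         lifetime=random.randint(1, 5),
                         priority=random.randint(1, 3))
        admitted += ctrl.step(req)
    print(f"FCFS admitted {admitted}/100 requests")
```

An RL admission policy would replace the unconditional capacity check with a learned accept/reject decision: unlike FCFS, it can turn away a low-priority or long-lived request that fits, so that capacity remains for more valuable requests arriving later.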