Improvements in edge computing technology pave the way for diversified applications that demand real-time interaction. However, due to the mobility of end users and the dynamic edge environment, handling task offloading with high performance becomes challenging. Moreover, since each application on a mobile device has different characteristics, a task orchestrator must be adaptive and able to learn the dynamics of the environment. For this purpose, we develop a deep-reinforcement-learning-based task orchestrator, DeepEdge, which learns to meet different task requirements without human intervention, even under heavily loaded stochastic network conditions in terms of mobile users and applications. Given the dynamic offloading requests and time-varying communication conditions, we model the problem as a Markov process and then apply the Double Deep Q-Network (DDQN) algorithm to implement DeepEdge. To evaluate the robustness of DeepEdge, we experiment with four different applications, namely image rendering, infotainment, pervasive health, and augmented reality, under various network loads. Furthermore, we compare the performance of our agent against four task offloading approaches from the literature. Our results show that DeepEdge outperforms its competitors in terms of the percentage of satisfactorily completed tasks.
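The core of the DDQN algorithm mentioned above is its target computation: the online network selects the best next action while the target network evaluates it, which reduces the Q-value overestimation of standard DQN. The following is a minimal, generic sketch of that update, not the paper's actual implementation; the function name, tensor shapes, and discount factor are illustrative assumptions.

```python
import numpy as np

def ddqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Compute Double DQN regression targets for a batch of transitions.

    rewards:       shape (B,)   immediate rewards
    next_q_online: shape (B, A) Q-values of next states from the online net
    next_q_target: shape (B, A) Q-values of next states from the target net
    dones:         shape (B,)   1.0 if the episode terminated, else 0.0
    """
    # Action selection with the ONLINE network (argmax over actions)
    best_actions = np.argmax(next_q_online, axis=1)
    # Action evaluation with the TARGET network (decoupled from selection)
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]
    # Bootstrapped target; terminal transitions use the reward alone
    return rewards + gamma * (1.0 - dones) * evaluated
```

In a full agent, these targets would be regressed against the online network's Q-values for the taken actions, with the target network periodically synchronized to the online weights.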