Benefiting from the fusion of communication and intelligent technologies, network-enabled robots have become an important enabler of future machine-assisted and unmanned applications. Hybrid satellite-terrestrial networks are a key technology for providing high-quality services to robots over wide areas. Through such hybrid networks, computation-intensive and latency-sensitive tasks can be offloaded to mobile edge computing (MEC) servers. However, due to robot mobility and unreliable wireless network environments, excessive local computation and frequent service migrations may significantly increase the service delay. To address this issue, this paper aims to minimize the average task completion time of MEC-based offloading initiated by satellite-terrestrial-network-enabled robots. Unlike conventional mobility-aware schemes, the proposed scheme makes offloading decisions jointly with the mobility control of the robots. A joint optimization problem of task offloading and velocity control is formulated. Using Lyapunov optimization, the original problem is decomposed into a velocity control subproblem and a task offloading subproblem. Then, based on the Markov decision process (MDP), a dual-agent reinforcement learning (RL) algorithm is proposed. The convergence and complexity of the proposed RL algorithm are analyzed theoretically, and simulation results show that the proposed scheme effectively reduces the offloading delay.
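To make the dual-agent structure concrete, the following is a minimal illustrative sketch, not the paper's implementation: two tabular Q-learning agents share a reward signal, one selecting the robot's velocity level (the velocity control subproblem) and the other selecting the offloading target (the task offloading subproblem). The state discretization, environment dynamics, and reward shaping below are hypothetical placeholders chosen only for illustration.

```python
import numpy as np

# Illustrative sketch: a dual-agent tabular Q-learning loop in which one agent
# controls the robot's velocity level and the other chooses the offloading
# target (local / terrestrial MEC / satellite MEC).  The state space, reward,
# and dynamics are hypothetical placeholders, not the paper's system model.

N_STATES = 20          # discretized (position, queue backlog) states -- assumed
VEL_ACTIONS = 3        # e.g. slow / medium / fast
OFF_ACTIONS = 3        # e.g. local / terrestrial MEC / satellite MEC
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

rng = np.random.default_rng(0)
Q_vel = np.zeros((N_STATES, VEL_ACTIONS))   # velocity-control agent
Q_off = np.zeros((N_STATES, OFF_ACTIONS))   # task-offloading agent

def step(state, v_act, o_act):
    """Toy environment: returns (next_state, reward).
    Reward is the negative of a synthetic task completion time that grows
    with slower velocity and with the latency of the chosen offloading target."""
    delay = 1.0 + 0.5 * (VEL_ACTIONS - 1 - v_act) + 0.8 * o_act + rng.normal(0, 0.05)
    next_state = (state + v_act + 1) % N_STATES
    return next_state, -delay

def eps_greedy(q_row):
    return rng.integers(len(q_row)) if rng.random() < EPS else int(np.argmax(q_row))

state = 0
for episode in range(2000):
    v_act = eps_greedy(Q_vel[state])
    o_act = eps_greedy(Q_off[state])
    nxt, reward = step(state, v_act, o_act)
    # Each agent updates its own Q-table using the shared reward signal.
    Q_vel[state, v_act] += ALPHA * (reward + GAMMA * Q_vel[nxt].max() - Q_vel[state, v_act])
    Q_off[state, o_act] += ALPHA * (reward + GAMMA * Q_off[nxt].max() - Q_off[state, o_act])
    state = nxt

print("greedy velocity policy:", Q_vel.argmax(axis=1))
print("greedy offloading policy:", Q_off.argmax(axis=1))
```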