The Internet of Things (IoT) is considered an enabling platform for a variety of promising applications, such as smart transportation and smart cities, where massive numbers of devices are interconnected for data collection and processing. These IoT applications place high demands on storage and computing capacity, while IoT devices are usually resource-constrained. As a potential solution, mobile edge computing (MEC) deploys cloud resources in the proximity of IoT devices so that their requests can be served locally. In this work, we investigate computation offloading in a dynamic MEC system with multiple edge servers, where computational tasks with various requirements are dynamically generated by IoT devices and offloaded to MEC servers in a time-varying operating environment (e.g., channel conditions change over time). The objective is to maximize the number of tasks completed before their respective deadlines while minimizing energy consumption. To this end, we propose an end-to-end Deep Reinforcement Learning (DRL) approach that selects the best edge server for offloading and allocates the optimal amount of computational resources such that the expected long-term utility is maximized. Simulation results demonstrate that the proposed approach outperforms existing methods.
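To make the offloading decision concrete, the following is a minimal sketch, not the paper's actual implementation, of a DQN-style agent whose discrete action space jointly encodes the chosen edge server and a quantized CPU-allocation level, with a reward reflecting the stated objective (task completion before the deadline minus an energy penalty). All names, dimensions, and the weighting factor are illustrative assumptions.

```python
# Illustrative sketch only: a DQN-style agent for joint server selection and
# CPU allocation in MEC offloading. Names and dimensions are assumptions.
import random
import torch
import torch.nn as nn

NUM_SERVERS = 4          # assumed number of candidate edge servers
NUM_CPU_LEVELS = 5       # assumed discretization of allocatable CPU resource
STATE_DIM = 2 + 2 * NUM_SERVERS  # (task size, deadline) + per-server (channel gain, queue length)
NUM_ACTIONS = NUM_SERVERS * NUM_CPU_LEVELS  # one index per (server, CPU level) pair


class QNetwork(nn.Module):
    """Maps the observed system state to one Q-value per (server, CPU level) action."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, NUM_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)


def select_action(q_net, state, epsilon=0.1):
    """Epsilon-greedy offloading decision; returns (server index, CPU level)."""
    if random.random() < epsilon:
        action = random.randrange(NUM_ACTIONS)
    else:
        with torch.no_grad():
            action = q_net(state).argmax().item()
    return divmod(action, NUM_CPU_LEVELS)  # action = server * NUM_CPU_LEVELS + level


def utility(completed_before_deadline: bool, energy_joules: float, beta=0.5):
    """Per-step utility matching the abstract's objective: reward on-time
    completion and penalize energy consumption (beta is an assumed weight)."""
    return (1.0 if completed_before_deadline else 0.0) - beta * energy_joules


if __name__ == "__main__":
    q_net = QNetwork()
    state = torch.randn(STATE_DIM)  # stand-in for a real system observation
    server, cpu_level = select_action(q_net, state)
    print(f"offload to server {server} with CPU level {cpu_level}")
```

Factoring the joint (server, resource) decision into a single discrete action index keeps a standard value-based DRL method applicable; a continuous resource allocation would instead call for an actor-critic formulation.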