With the growing demand for latency-critical and computation-intensive Internet of Things (IoT) services, mobile edge computing (MEC), an IoT-oriented network architecture, has emerged as a promising technique to augment the computation capability of resource-constrained IoT devices. To exploit the cloud-like functions at the network edge, service caching can be employed to reuse computation task input/output data, thereby effectively reducing the delay incurred by data retransmissions and repeated execution of the same task. In a multi-user cache-assisted MEC system, users' preferences for different types of services, which may depend on their locations, play an important role in the joint design of communication, computation, and service caching. In this paper, we consider multiple representative locations, where users at the same location share the same preference profile for a given set of services. Specifically, by exploiting the location-aware user preference profiles, we formulate a joint optimization of the binary cache placement, the edge computation resource allocation, and the bandwidth allocation that minimizes the expected sum-energy consumption, subject to bandwidth and computation limitations as well as service latency constraints. To effectively solve this mixed-integer non-convex problem, we propose a deep learning (DL)-based offline cache placement scheme built on a novel stochastic-quantization-based discrete-action generation method. The proposed hybrid learning framework combines the benefits of the model-free DL approach and model-based optimization. Simulations verify that the proposed DL-based scheme saves roughly 33% and 6.69% of the energy consumption compared with greedy caching and popular caching, respectively, while achieving up to 99.01% of the optimal performance.
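As a rough illustration of the discrete-action generation idea mentioned above, the sketch below shows one way a stochastic quantization step could turn a DNN's relaxed (continuous) caching probabilities into candidate binary cache placements, with a model-based stage then evaluating each candidate. This is a minimal sketch under stated assumptions: the function names `stochastic_quantize` and `pick_best`, the capacity-truncation rule, and the `energy_of` callback are hypothetical and not the paper's exact procedure.

```python
import numpy as np


def stochastic_quantize(probs, num_candidates, cache_capacity, rng=None):
    """Generate candidate binary cache-placement vectors from relaxed
    per-service caching probabilities via Bernoulli (stochastic) rounding.

    probs: array of shape (num_services,), entries in [0, 1] interpreted as
           caching probabilities output by a DNN (illustrative assumption).
    """
    rng = np.random.default_rng() if rng is None else rng
    candidates = []
    for _ in range(num_candidates):
        # Stochastic rounding: cache service s with probability probs[s].
        x = (rng.random(probs.shape) < probs).astype(int)
        # Enforce the cache-capacity constraint by keeping only the selected
        # services with the highest probabilities (illustrative repair rule).
        if x.sum() > cache_capacity:
            keep = np.argsort(-probs * x)[:cache_capacity]
            x = np.zeros_like(x)
            x[keep] = 1
        candidates.append(x)
    return candidates


def pick_best(candidates, energy_of):
    """Keep the candidate placement with the lowest expected sum-energy,
    where energy_of(x) is assumed to solve the remaining (convex) bandwidth
    and computation resource allocation for a fixed placement x."""
    return min(candidates, key=energy_of)
```

In this hybrid view, the model-free DL part supplies the continuous probabilities, while the model-based part evaluates each quantized placement by optimizing the continuous resources, which matches the framework's stated intent of drawing on both approaches.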