In this work, we study the problem of energy-efficient computation offloading enabled by edge computing. In the considered scenario, multiple users simultaneously compete for limited radio and edge computing resources to get their offloaded tasks processed under a delay constraint, with the possibility of exploiting low-power sleep modes at all network nodes. The radio resource allocation takes into account inter- and intra-cell interference, and the duty cycles of the radio and computing equipment must be jointly optimized to minimize the overall energy consumption. To this end, we formulate the underlying problem as a dynamic long-term optimization. Then, using Lyapunov stochastic optimization tools, we decouple the formulated problem into a CPU scheduling problem and a radio resource allocation problem, each solved on a per-slot basis. Whereas the first can be solved optimally and efficiently with a fast iterative algorithm, the second is solved via distributed multi-agent reinforcement learning owing to its non-convexity and NP-hardness. The resulting framework achieves up to 96.5% of the performance of the optimal exhaustive-search strategy while drastically reducing complexity. The proposed solution also increases the network's energy efficiency compared to a benchmark heuristic approach.