The emergence of 5G networks has enabled the deployment of a two-tier edge and vehicular-fog architecture comprising Multi-access Edge Computing (MEC) servers and Vehicular-Fogs (VFs). Positioned close to Internet of Things (IoT) devices, these resources reduce propagation latency relative to cloud-based solutions and help maintain satisfactory quality of service (QoS). During high-traffic events such as concerts or sporting events, however, MEC sites can become congested and overloaded. Offloading techniques transfer computationally intensive tasks from resource-constrained devices to nodes with sufficient capacity, accelerating task execution and extending device battery life. In this work, we study offloading within a two-tier MEC and VF architecture, covering both MEC-to-MEC and MEC-to-VF offloading. The objective is to minimize the average system cost, which accounts for both latency and energy consumption. We formulate a multi-objective optimization problem that minimizes latency and energy subject to the available resource constraints. To enable near-optimal offloading decisions, we design an equivalent reinforcement learning environment that faithfully represents the network architecture and the formulated problem, and we propose a Distributed-TD3 (DTD3) approach that builds on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. Extensive simulations demonstrate that our strategy achieves faster convergence and higher efficiency than benchmark solutions.
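As a rough illustration of the latency-energy cost trade-off described above, the sketch below computes a weighted system cost for a single offloading decision. This is a minimal sketch under assumed parameters, not the paper's actual model: the function name, parameter names, and all numerical values (uplink rate, CPU frequency, transmit power, switched-capacitance coefficient, weight w) are hypothetical placeholders.

```python
# Minimal sketch (assumed parameters, not the paper's implementation):
# weighted latency-energy cost of offloading one task to an MEC/VF node.

def offloading_cost(task_bits, task_cycles, rate_bps, cpu_hz,
                    tx_power_w, kappa=1e-27, w=0.5):
    """Return the weighted cost w * latency + (1 - w) * energy for one task.

    task_bits   -- task input size to transmit (bits)              [assumed]
    task_cycles -- CPU cycles required to process the task         [assumed]
    rate_bps    -- uplink rate to the serving MEC/VF node (bit/s)  [assumed]
    cpu_hz      -- CPU frequency of the serving MEC/VF node (Hz)   [assumed]
    tx_power_w  -- device transmit power (W)                       [assumed]
    kappa       -- effective switched-capacitance coefficient      [assumed]
    w           -- latency/energy trade-off weight in [0, 1]       [assumed]
    """
    t_tx = task_bits / rate_bps                # transmission delay
    t_cpu = task_cycles / cpu_hz               # remote computation delay
    latency = t_tx + t_cpu

    e_tx = tx_power_w * t_tx                   # energy spent transmitting
    e_cpu = kappa * task_cycles * cpu_hz ** 2  # dynamic CPU energy at the node
    energy = e_tx + e_cpu

    return w * latency + (1.0 - w) * energy


if __name__ == "__main__":
    # Example: 1 Mbit task, 1e9 cycles, 10 Mbit/s uplink, 5 GHz server CPU.
    print(offloading_cost(1e6, 1e9, 10e6, 5e9, tx_power_w=0.5))
```

In an RL formulation such as the DTD3 approach mentioned above, the negative of a per-task cost of this kind could serve as the reward signal, so that minimizing the average system cost corresponds to maximizing the expected return.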