The fog radio access network (F-RAN) is a promising technology in which user mobile devices (MDs) can offload computation tasks to nearby fog access points (F-APs). Because the resources of F-APs are limited, it is important to design an efficient task offloading scheme. In this paper, considering a time-varying network environment, a dynamic computation offloading and resource allocation problem in F-RANs is formulated to minimize the task execution delay and energy consumption of MDs. To solve this problem, a federated deep reinforcement learning (DRL) based algorithm is proposed, in which the deep deterministic policy gradient (DDPG) algorithm performs computation offloading and resource allocation at each F-AP. Federated learning is exploited to train the DDPG agents, which reduces the computational complexity of the training process and protects user privacy. Simulation results show that the proposed federated DDPG algorithm achieves lower task execution delay and energy consumption of MDs, and converges faster, than existing strategies.
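To illustrate the training structure the abstract describes, the following is a minimal sketch of federated averaging over per-F-AP DDPG actors: each F-AP trains its own agent on local data, and a federated round averages the actor parameters so that raw user data never leaves the F-AP. All names (`FAPAgent`, `fed_avg`), the toy linear actor, and the random placeholder for the local DDPG gradient step are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class FAPAgent:
    """Toy DDPG-style agent at one F-AP: a single linear actor layer
    mapping local state (e.g. channel gains, task queue lengths) to an
    offloading ratio and a resource-allocation share. Hypothetical."""
    def __init__(self, state_dim=4, action_dim=2):
        self.W = rng.normal(size=(action_dim, state_dim))
        self.b = np.zeros(action_dim)

    def act(self, state):
        # Sigmoid squashes each action into (0, 1), suitable for a
        # continuous offloading fraction / resource share.
        return 1.0 / (1.0 + np.exp(-(self.W @ state + self.b)))

    def local_update(self, lr=0.01):
        # Placeholder for a local DDPG actor-critic gradient step on
        # this F-AP's own replay buffer (omitted in this sketch).
        self.W -= lr * rng.normal(size=self.W.shape)

def fed_avg(agents):
    """One federated round: average actor parameters across F-APs and
    broadcast the global model back. Only model weights are exchanged,
    which is the privacy/complexity benefit the abstract refers to."""
    W = np.mean([a.W for a in agents], axis=0)
    b = np.mean([a.b for a in agents], axis=0)
    for a in agents:
        a.W, a.b = W.copy(), b.copy()

agents = [FAPAgent() for _ in range(3)]   # three F-APs
for _ in range(5):                        # local training epochs
    for a in agents:
        a.local_update()
fed_avg(agents)                           # aggregation round

# After aggregation, every F-AP holds identical actor weights.
assert all(np.allclose(agents[0].W, a.W) for a in agents[1:])
```

The sketch only demonstrates the communication pattern (local DDPG updates, then parameter averaging); a faithful implementation would replace `local_update` with actual actor-critic learning on each F-AP's offloading experience.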