Mobile edge computing (MEC) is a promising paradigm for meeting the quality-of-service (QoS) requirements of latency-sensitive IoT applications. However, attackers may eavesdrop on the offloading decisions to infer the edge server's (ES's) queue information and users' usage patterns, thereby raising a pattern privacy (PP) issue. We therefore propose an offloading strategy that jointly minimizes the latency, the ES's energy consumption, and the task dropping rate while preserving PP. First, we formulate the dynamic computation offloading procedure as a Markov decision process (MDP). Next, we develop a Differential Privacy Deep Q-learning based Offloading (DP-DQO) algorithm that solves this problem while addressing the PP issue by injecting noise into the generated offloading decisions; this is achieved by modifying the deep Q-network (DQN) with a Function-output Gaussian process mechanism. We provide a theoretical privacy guarantee and a utility guarantee (learning error bound) for the DP-DQO algorithm, and finally conduct simulations to evaluate the performance of the proposed algorithm by comparing it with greedy and DQN-based algorithms.
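To make the decision-perturbation step concrete, the following Python sketch (not the authors' implementation; the network architecture, the noise scale `sigma`, and all identifiers such as `QNetwork` and `private_offloading_action` are illustrative assumptions) adds Gaussian noise to the Q-values produced by a small DQN before the greedy offloading action is taken, so that the published decision becomes a randomized function of the ES state.

```python
# Minimal sketch, assuming a small fully connected Q-network and a fixed
# Gaussian noise scale; in the paper the noise would be calibrated to the
# desired differential-privacy level.
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Toy Q-network mapping an MEC state to Q-values over offloading actions."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def private_offloading_action(q_net: QNetwork,
                              state: torch.Tensor,
                              sigma: float) -> int:
    """Select an offloading action from Gaussian-perturbed Q-values.

    Adding noise to the network output (rather than to the action directly)
    mirrors the idea of perturbing the generated offloading decisions so the
    observed decision leaks less about the ES queue state.
    """
    with torch.no_grad():
        q_values = q_net(state)                                   # clean Q-value estimates
        noisy_q = q_values + sigma * torch.randn_like(q_values)   # perturbed outputs
    return int(noisy_q.argmax().item())                           # noisy greedy decision


# Usage example with a hypothetical 4-dimensional state and 3 offloading choices.
if __name__ == "__main__":
    q_net = QNetwork(state_dim=4, n_actions=3)
    state = torch.tensor([0.2, 0.5, 0.1, 0.9])   # e.g. queue length, channel gain, ...
    action = private_offloading_action(q_net, state, sigma=0.5)
    print("chosen offloading action:", action)
```

Larger values of `sigma` hide more of the queue and usage-pattern information behind the decisions, at the cost of a larger learning error, which is the privacy-utility trade-off the abstract's guarantees characterize.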