Artificial intelligence and distributed algorithms have been widely applied to mechanical fault diagnosis as diagnostic data grow explosively. This paper presents a novel intelligent fault diagnosis system framework that allows intelligent terminals to offload computational tasks to mobile edge computing (MEC) servers, effectively addressing the problems of task processing delays and increased computational complexity. Because resources at both the MEC servers and the intelligent terminals are limited, reasonable resource allocation optimization can improve performance, especially for a multi-terminal offloading system. In this study, to minimize the task computation delay, we jointly optimize the local content splitting ratio, the transmission/computation power allocation, and the MEC server selection in a dynamic environment with stochastic task arrivals. This challenging dynamic joint optimization problem is formulated as a reinforcement learning (RL) problem, in which computational offloading policies are designed to minimize the long-term average delay cost. Two deep RL strategies, deep Q-learning network (DQN) and deep deterministic policy gradient (DDPG), are adopted to learn the computational offloading policies adaptively and efficiently. The proposed DQN strategy takes the MEC server selection as its sole action, using a convex optimization approach to obtain the local content splitting ratio and the transmission/computation power allocation. In contrast, the actions of the DDPG strategy comprise all dynamic variables, including the local content splitting ratio, the transmission/computation power allocation, and the MEC server selection. Numerical results demonstrate that both proposed strategies outperform traditional non-learning schemes.
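The offloading decision described above can be illustrated with a minimal sketch. The code below is a hypothetical, heavily simplified stand-in for the DQN strategy: a tabular Q-learning agent whose only action is the MEC server selection, trained against an assumed toy environment with stochastic task arrivals and a delay-based reward. The server rates, backlog states, and delay model are illustrative assumptions, not the paper's system model; the paper's actual approach uses a deep Q-network and convex optimization for the continuous variables.

```python
import random

# Illustrative sketch (not the paper's model): tabular Q-learning over the
# MEC server selection, with reward = negative per-task delay.
random.seed(0)

N_SERVERS = 3
SERVER_RATE = [2.0, 5.0, 3.0]   # assumed MEC processing rates (work units/slot)
N_STATES = 3                    # coarse queue-backlog levels at the terminal

# Q[state][action]: estimated long-term (negative) delay cost
Q = [[0.0] * N_SERVERS for _ in range(N_STATES)]

def step(state, action):
    """Assumed environment: a stochastic task arrives; delay is a simple
    size/rate proxy for transmission plus computation time."""
    arrival = random.choice([0.5, 1.0, 1.5])           # stochastic task size
    delay = arrival / SERVER_RATE[action]              # offloading delay proxy
    next_state = min(N_STATES - 1, int(arrival))       # coarse backlog level
    return -delay, next_state                          # reward = negative delay

alpha, gamma, eps = 0.1, 0.9, 0.1                      # assumed hyperparameters
state = 0
for _ in range(5000):
    # epsilon-greedy selection over MEC servers
    if random.random() < eps:
        action = random.randrange(N_SERVERS)
    else:
        action = max(range(N_SERVERS), key=lambda a: Q[state][a])
    reward, nxt = step(state, action)
    # one-step temporal-difference update toward the delay-cost target
    Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt

best = max(range(N_SERVERS), key=lambda a: Q[0][a])
print("preferred MEC server in state 0:", best)
```

With this toy delay model the agent gravitates toward the faster server; in the paper's full formulation, the DQN additionally relies on convex optimization to set the local content splitting ratio and the transmission/computation power, while DDPG would output those continuous quantities directly as part of its action vector.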