Recent research has shown the potential of using available mobile fog devices (such as smartphones, drones, and domestic and industrial robots) as relays to minimize communication outages between sensors and the destination devices where localized Internet-of-Things services (e.g., manufacturing process control, health and security monitoring) are delivered. However, these mobile relays deplete their energy as they move and transmit to distant destinations. As such, power-control mechanisms and intelligent mobility of the relay devices are critical to improving both communication performance and energy utilization. In this paper, we propose a decentralized Q-learning approach in which each mobile fog relay agent (MFRA) is controlled by an autonomous agent that uses reinforcement learning to improve communication performance and energy utilization simultaneously. Based on feedback from the destination and its own energy level, each agent learns whether to remain active and forward the message, or to become passive for that transmission phase. We evaluate the approach against a centralized baseline and observe that, with a smaller number of MFRAs, our approach ensures reliable data delivery while reducing overall energy cost by 56.76\% -- 88.03\%.
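The per-agent decision described above (stay active and forward, or go passive, based on destination feedback and remaining energy) can be sketched as tabular Q-learning. The states, reward shaping, and parameters below are illustrative assumptions for exposition, not the paper's exact formulation:

```python
import random

ACTIONS = ("active", "passive")  # forward the message, or sit out this phase

class MFRAgent:
    """Minimal tabular Q-learning sketch of one mobile fog relay agent."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}             # (state, action) -> estimated value
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration rate

    def choose(self, state):
        # Epsilon-greedy action selection over the two relay actions.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update toward reward + discounted best next value.
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

def reward(delivered, energy_cost):
    # Assumed reward: destination delivery feedback minus energy spent.
    return (1.0 if delivered else -1.0) - energy_cost

# Toy training loop with a coarse two-level energy state. Delivery odds and
# energy costs are invented for illustration only.
random.seed(0)
agent = MFRAgent()
for _ in range(2000):
    energy = random.choice(("low", "high"))
    action = agent.choose(energy)
    if action == "active":
        delivered = random.random() < (0.9 if energy == "high" else 0.6)
        cost = 0.2 if energy == "high" else 1.2  # forwarding is costlier when drained
        r = reward(delivered, cost)
    else:
        r = 0.0  # passive phase: no delivery feedback, no transmit cost
    agent.update(energy, action, r, random.choice(("low", "high")))
```

After training, the learned table prefers forwarding in the high-energy state and going passive in the low-energy state, mirroring the active/passive trade-off the abstract describes.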