Millions of sensors, cameras, meters, and other edge devices are deployed in networks to collect and analyse data. In many cases, such devices are powered only by Energy Harvesting (EH) and have limited energy available to analyse acquired data. When edge infrastructure is available, a device has a choice: to perform analysis locally or to offload the task to resource-rich devices such as cloudlet servers. However, such a choice carries a cost in terms of consumed energy and accuracy. On the one hand, transmitting raw data can incur a higher energy cost than processing the data locally. On the other hand, performing data analytics on servers can improve the task's accuracy. Additionally, due to the correlation between the information sent by multiple devices, accuracy might not suffer if some edge devices decide to neither process nor send data, preserving energy instead. For such a scenario, we propose a Deep Reinforcement Learning (DRL) based solution capable of learning and adapting its policy to time-varying energy arrivals due to EH patterns. We leverage two datasets: one to model the energy an EH device can harvest, and the other to model the correlation between cameras. Furthermore, we compare the performance of the proposed solution to three baseline policies. Our results show that we can increase accuracy by 15% in comparison to conventional approaches while preventing outages.
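To make the decision problem concrete, the sketch below frames it as a small reinforcement-learning loop over three actions: process locally, offload, or stay idle and rely on correlated neighbours. It is a minimal tabular Q-learning illustration, not the paper's method: the paper trains a deep RL agent on real EH and camera-correlation traces, whereas every constant here (energy costs, reward values, the harvesting distribution, the outage penalty) is an assumption chosen only for illustration.

```python
import random
from collections import defaultdict

# Actions available to an energy-harvesting edge device at each step.
LOCAL, OFFLOAD, IDLE = 0, 1, 2
ACTIONS = (LOCAL, OFFLOAD, IDLE)

# Illustrative energy costs in abstract battery units (assumed, device-specific
# in practice); offloading raw data is pricier than local processing.
COST = {LOCAL: 3, OFFLOAD: 5, IDLE: 0}
BATTERY_MAX = 10

Q = defaultdict(float)              # Q[(battery_level, action)] -> value estimate
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # learning rate, discount, exploration rate

def reward(battery, action):
    """Toy reward: offloading yields the best accuracy, local processing less,
    idling leans on correlated neighbours; an outage is heavily penalised.
    All numbers are illustrative assumptions, not values from the paper."""
    if COST[action] > battery:
        return -10.0                # outage: not enough energy for the action
    return {LOCAL: 0.7, OFFLOAD: 1.0, IDLE: 0.4}[action]

def step(battery, action, harvested):
    """Apply the action, then add the harvested energy, capped at capacity."""
    r = reward(battery, action)
    spent = COST[action] if COST[action] <= battery else 0
    nxt = min(BATTERY_MAX, battery - spent + harvested)
    return nxt, r

def choose(battery):
    """Epsilon-greedy action selection over the tabular Q estimates."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(battery, a)])

battery = BATTERY_MAX
for t in range(10_000):
    harvested = random.choice([0, 1, 2, 4])  # stand-in for a real EH trace
    a = choose(battery)
    nxt, r = step(battery, a, harvested)
    best_next = max(Q[(nxt, a2)] for a2 in ACTIONS)
    Q[(battery, a)] += ALPHA * (r + GAMMA * best_next - Q[(battery, a)])
    battery = nxt

# Learned policy: preferred action at each battery level.
print({b: max(ACTIONS, key=lambda a: Q[(b, a)]) for b in range(BATTERY_MAX + 1)})
```

With these assumed numbers, the learned policy tends to offload when the battery is comfortably above the offloading cost and to idle near depletion, mirroring the trade-off the abstract describes between accuracy and outage avoidance.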