We consider a network of smart sensors for edge computing applications that sample a signal of interest and send updates to a base station for remote global monitoring. Sensors are equipped with sensing and computing hardware, and can either transmit raw data or process them on board before transmission. Limited hardware resources at the edge induce a fundamental latency-accuracy trade-off: raw measurements are inaccurate but timely, whereas accurate processed updates are available only after a computational delay. Moreover, if on-board processing entails data compression, the wireless communication latency may be higher for raw measurements. Hence, one needs to decide when sensors should transmit raw measurements and when they should rely on local processing so as to maximize overall network performance. To tackle this sensing design problem, we develop an estimation-theoretic optimization framework that embeds computation and communication delays, and propose a Reinforcement Learning-based approach to dynamically allocate computational resources at each sensor. The effectiveness of the proposed approach is validated through numerical simulations with case studies motivated by the Internet of Drones and self-driving vehicles.
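The abstract summarizes the approach without fixing a concrete model. To make the raw-versus-processed decision tangible, below is a minimal Python sketch of one way such a policy could be learned. It assumes a toy setting: a base station whose estimation error grows with the staleness (age) of its freshest update, a lossy channel, and a tabular Q-learning agent choosing between fast-but-noisy raw updates and slow-but-accurate processed ones. All names and parameter values (`Q_NOISE`, `SIGMA`, `DELAY`, `P_LOSS`) are illustrative assumptions, not the paper's actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy model (all numbers are illustrative assumptions, not from the paper) ---
Q_NOISE = 1.0               # estimation error grows by this much per step of staleness
SIGMA = {0: 9.0, 1: 0.5}    # measurement-error variance: action 0 = raw, 1 = processed
DELAY = {0: 1, 1: 3}        # steps until the update reaches the base station
P_LOSS = 0.2                # probability a transmitted update is lost
A_MAX = 10                  # cap on the discretized age-of-information state

def send(age, action):
    """Simulate one update; return (next staleness, cost accrued meanwhile)."""
    d = DELAY[action]
    # Staleness accumulates process noise while the update is in flight.
    cost = sum(min(age + k, A_MAX) for k in range(d)) * Q_NOISE
    if rng.random() < P_LOSS:                   # update lost: staleness keeps growing
        return min(age + d, A_MAX), cost
    return min(d, A_MAX), cost + SIGMA[action]  # delivered: staleness resets to the delay

# Tabular Q-learning: state = age of the freshest update at the base station.
q_table = np.zeros((A_MAX + 1, 2))
alpha, gamma, eps = 0.1, 0.95, 0.1
age = A_MAX
for _ in range(100_000):
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmin(q_table[age]))
    nxt, cost = send(age, a)
    # Minimizing cost, hence min/argmin in place of the usual max/argmax.
    q_table[age, a] += alpha * (cost + gamma * q_table[nxt].min() - q_table[age, a])
    age = nxt

for s in range(A_MAX + 1):
    print(f"staleness {s:2d}: {'process on-board' if np.argmin(q_table[s]) else 'send raw'}")
```

With these toy numbers, the learned greedy policy tends toward a threshold rule: process on board while the base station's estimate is fresh, and fall back to fast raw updates once it becomes stale, mirroring the latency-accuracy trade-off the abstract describes.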