We consider a network of smart sensors for edge computing applications that sample a signal of interest and send updates to a base station for remote global monitoring. Sensors are equipped with both sensing and computing capabilities, and can either transmit raw data or process them on-board before transmission. Limited hardware resources at the edge induce a fundamental latency-accuracy trade-off: raw measurements are inaccurate but timely, whereas accurate processed updates become available only after a computational delay. Moreover, if on-board processing entails data compression, the wireless communication latency may be higher for raw measurements than for processed ones. Hence, one needs to decide when sensors should transmit raw measurements and when they should rely on local processing, so as to maximize overall network performance. To tackle this sensing design problem, we develop an estimation-theoretic optimization framework that embeds both computation and communication delays, and propose a Reinforcement Learning-based approach that dynamically allocates computational resources at each sensor. The effectiveness of the proposed approach is validated through numerical simulations, with case studies motivated by the Internet of Drones and self-driving vehicles.
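To make the latency-accuracy trade-off concrete, the sketch below gives a minimal single-sensor toy model; it is not the paper's implementation. A tabular Q-learning agent tracks a scalar random-walk signal and, at each step, chooses between a timely-but-noisy raw update and an accurate-but-delayed processed one, with reward equal to the negative squared estimation error at the base station. The noise levels, processing delay, and learning hyperparameters are all illustrative assumptions.

```python
# Toy sketch of the raw-vs-processed decision (illustrative assumptions throughout,
# not the paper's model or parameters).
import random

RAW, PROCESSED = 0, 1
SIGMA_RAW, SIGMA_PROC = 2.0, 0.2     # assumed noise std of raw vs processed updates
DELAY_PROC = 3                       # assumed on-board processing delay (time steps)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1    # Q-learning hyperparameters (illustrative)
MAX_AGE = DELAY_PROC + 1             # cap the age of information for a finite state space

def simulate(episodes=500, horizon=200):
    # Tabular Q-function over (age of freshest delivered update, action).
    q = {(age, a): 0.0 for age in range(MAX_AGE + 1) for a in (RAW, PROCESSED)}
    for _ in range(episodes):
        x, estimate, age = 0.0, 0.0, 0
        pending = []                 # (arrival_time, measurement) awaiting delivery
        for t in range(horizon):
            # Epsilon-greedy choice between timely raw and delayed processed updates.
            if random.random() < EPS:
                a = random.choice((RAW, PROCESSED))
            else:
                a = max((RAW, PROCESSED), key=lambda u: q[(age, u)])
            x += random.gauss(0.0, 1.0)          # signal of interest: random walk
            delay = 0 if a == RAW else DELAY_PROC
            noise = SIGMA_RAW if a == RAW else SIGMA_PROC
            pending.append((t + delay, x + random.gauss(0.0, noise)))
            # Deliver updates whose computational delay has elapsed.
            arrived = [p for p in pending if p[0] <= t]
            pending = [p for p in pending if p[0] > t]
            if arrived:
                estimate, new_age = arrived[-1][1], 0
            else:
                new_age = min(age + 1, MAX_AGE)
            r = -(x - estimate) ** 2             # reward: negative squared estimation error
            target = r + GAMMA * max(q[(new_age, u)] for u in (RAW, PROCESSED))
            q[(age, a)] += ALPHA * (target - q[(age, a)])
            age = new_age
    return q

if __name__ == "__main__":
    q = simulate()
    for age in range(MAX_AGE + 1):
        best = "raw" if q[(age, RAW)] >= q[(age, PROCESSED)] else "processed"
        print(f"age={age}: prefer {best}")
```

The state in this sketch is only the age of the freshest update at the base station; the framework described above is richer, embedding both computation and communication delays across multiple sensors.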