Emerging mobile virtual reality (VR) systems will need to continuously perform complex computer vision tasks on ultra-high-resolution video frames through the execution of deep neural network (DNN)-based algorithms. Since state-of-the-art DNNs require computational power that is excessive for mobile devices, techniques based on wireless edge computing (WEC) have recently been proposed. However, existing WEC methods require the transmission and processing of a large amount of video data, which may ultimately saturate the wireless link. In this paper, we propose a novel Sensing-Assisted Wireless Edge Computing (SAWEC) paradigm to address this issue. SAWEC leverages knowledge about the physical environment to reduce the end-to-end latency and overall computational burden by transmitting to the edge server only the data relevant for the delivery of the service. Our intuition is that transmitting the portions of the video frames that are unchanged with respect to previous frames can be avoided. Specifically, we leverage wireless sensing techniques to estimate the location of objects in the environment and obtain insights about the environment dynamics. Hence, only the parts of the frames where environmental changes are detected are transmitted and processed. We evaluated SAWEC by using a 10K 360$^{\circ}$ camera with a Wi-Fi 6 sensing system operating at 160 MHz and performing localization and tracking. We considered instance segmentation and object detection as benchmarking tasks for performance evaluation. We carried out experiments in an anechoic chamber and an entrance hall with two human subjects in six different setups. Experimental results show that SAWEC reduces both channel occupation and end-to-end latency by more than 90% while improving instance segmentation and object detection performance with respect to state-of-the-art WEC approaches.
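To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of the SAWEC-style frame-reduction step: given object locations estimated by a wireless sensing stage, only the frame region around the detected activity is cropped for transmission to the edge server. The function name, the `margin` parameter, and the dummy sensed coordinates are all illustrative assumptions.

```python
import numpy as np

def crop_active_region(frame, object_xy, margin=0.1):
    """Crop the sub-region of `frame` around sensed object locations.

    frame:      H x W x 3 uint8 array (one ultra-high-resolution frame).
    object_xy:  list of (x, y) pixel coordinates, assumed to come from
                the wireless sensing / localization stage.
    margin:     fractional padding added around the bounding box.
    Returns (crop, (x0, y0)) or None if no environmental change was sensed.
    """
    if not object_xy:
        return None  # nothing changed: skip transmission entirely
    h, w = frame.shape[:2]
    xs = [x for x, _ in object_xy]
    ys = [y for _, y in object_xy]
    pad_x, pad_y = int(margin * w), int(margin * h)
    x0, x1 = max(min(xs) - pad_x, 0), min(max(xs) + pad_x, w)
    y0, y1 = max(min(ys) - pad_y, 0), min(max(ys) + pad_y, h)
    return frame[y0:y1, x0:x1], (x0, y0)

# Example: a dummy 10K-wide frame and two hypothetical sensed subjects.
frame = np.zeros((5000, 10000, 3), dtype=np.uint8)
result = crop_active_region(frame, [(2400, 1800), (2700, 2100)])
if result is not None:
    crop, offset = result
    print(crop.shape, offset)  # only this crop would be sent to the edge
```

In this sketch, the crop offset is returned alongside the pixels so that the edge server's detection or segmentation outputs could be mapped back into full-frame coordinates; the actual paper's pipeline details may differ.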