Real-time video analytics services aim to provide users with accurate recognition results in a timely manner. However, existing studies usually face a dilemma between reducing delay and improving accuracy. The edge computing scenario imposes strict transmission and computation resource constraints, which makes it difficult to balance these conflicting metrics under dynamic network conditions. To this end, we introduce the concept of the age of processed information (AoPI), which quantifies the time elapsed since the generation of the latest accurately recognized frame. AoPI captures the combined impact of recognition accuracy, transmission efficiency, and computation efficiency. We derive closed-form expressions for AoPI under preemptive and non-preemptive computation scheduling policies with respect to the transmission/computation rate and the recognition accuracy of video frames. We then investigate the joint problem of edge server selection, video configuration adaptation, and bandwidth/computation resource allocation to minimize the long-term average AoPI over all cameras. We propose an online method, Lyapunov-based block coordinate descent (LBCD), which decouples the original problem into two subproblems that optimize the video configuration/resource allocation and the edge server selection strategy separately. We prove that LBCD achieves asymptotically optimal performance. Testbed experiments and simulation results show that LBCD reduces the average AoPI by up to 10.94X compared to state-of-the-art baselines.
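To make the metric concrete, the following minimal Python sketch tracks AoPI as defined above: the age resets only when a fresher frame whose recognition is deemed accurate arrives, and grows linearly with time otherwise. The class name, accuracy threshold, and callback interface are illustrative assumptions rather than the paper's formulation.

```python
class AoPITracker:
    """Minimal sketch of the age of processed information (AoPI):
    the time elapsed since the generation of the latest frame that
    was recognized accurately. The accuracy threshold below is an
    assumed cutoff, not taken from the paper."""

    def __init__(self, accuracy_threshold: float = 0.9):
        self.accuracy_threshold = accuracy_threshold
        # Generation time of the newest accurately recognized frame.
        self.latest_accurate_gen_time = None

    def on_frame_processed(self, generation_time: float, accuracy: float) -> None:
        # Only a fresher, accurately recognized frame resets the age;
        # inaccurate or stale results leave AoPI growing.
        if accuracy >= self.accuracy_threshold and (
            self.latest_accurate_gen_time is None
            or generation_time > self.latest_accurate_gen_time
        ):
            self.latest_accurate_gen_time = generation_time

    def aopi(self, now: float) -> float:
        # Age grows linearly until a fresher accurate frame arrives.
        if self.latest_accurate_gen_time is None:
            return float("inf")
        return now - self.latest_accurate_gen_time


if __name__ == "__main__":
    tracker = AoPITracker(accuracy_threshold=0.9)
    tracker.on_frame_processed(generation_time=0.0, accuracy=0.95)  # accurate frame
    tracker.on_frame_processed(generation_time=1.0, accuracy=0.40)  # inaccurate, ignored
    print(tracker.aopi(now=2.5))  # 2.5: age counts from the accurate frame at t=0
```

Note how the inaccurate frame at t=1.0 does not reset the age, reflecting that AoPI jointly penalizes recognition errors and transmission/computation delay.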