With the rapid development of intelligent transportation system applications, a tremendous amount of multi-view video data has emerged to enhance vehicle perception. However, performing video analytics efficiently by exploiting the spatial-temporal redundancy in such data remains challenging. Accordingly, we propose CEVAS, a novel traffic-oriented framework for efficient object detection on multi-view video data. Specifically, a fine-grained input filtering policy is introduced to extract reasonable regions of interest from the captured images. We also design a sharing object manager that maintains the information of spatially redundant objects and shares their detection results with other vehicles. We further develop a content-aware model selection policy to choose detection methods adaptively. Experimental results show that our framework significantly reduces response latency while achieving the same detection accuracy as state-of-the-art methods.
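To make the three components concrete, the following is a minimal sketch of how such a pipeline could be wired together, assuming hypothetical names (`filter_rois`, `SharedObjectManager`, `select_model`) and placeholder heuristics; the actual policies in CEVAS are content-aware and learned, not these stubs.

```python
# Minimal sketch of a CEVAS-style pipeline: input filtering, a shared object
# manager for spatially redundant results, and adaptive model selection.
# All names and thresholds here are illustrative assumptions, not the paper's API.
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

BBox = Tuple[int, int, int, int]  # (x, y, w, h) region of interest


def filter_rois(rois: List[BBox], prev_rois: List[BBox]) -> List[BBox]:
    """Fine-grained input filtering: keep only regions not already seen in the
    previous frame (a stand-in for the temporal-redundancy filter)."""
    return [r for r in rois if r not in prev_rois]


@dataclass
class SharedObjectManager:
    """Caches results for objects visible from multiple views so that overlapping
    vehicles can reuse them instead of re-running detection."""
    cache: Dict[BBox, str] = field(default_factory=dict)

    def lookup(self, roi: BBox) -> Optional[str]:
        return self.cache.get(roi)

    def publish(self, roi: BBox, label: str) -> None:
        self.cache[roi] = label


def select_model(roi: BBox, small_thresh: int = 64 * 64) -> str:
    """Content-aware model selection: route small regions to a light detector and
    larger ones to an accurate detector (placeholder heuristic)."""
    _, _, w, h = roi
    return "light_detector" if w * h < small_thresh else "accurate_detector"


def process_frame(rois: List[BBox], prev_rois: List[BBox],
                  manager: SharedObjectManager) -> Dict[BBox, str]:
    """Run filtering, result reuse, and adaptive detection for one frame."""
    results: Dict[BBox, str] = {}
    for roi in filter_rois(rois, prev_rois):
        cached = manager.lookup(roi)                      # reuse shared results
        label = cached or f"detected_by_{select_model(roi)}"
        manager.publish(roi, label)                       # share with other vehicles
        results[roi] = label
    return results


if __name__ == "__main__":
    mgr = SharedObjectManager()
    prev_frame = [(10, 10, 50, 50)]
    cur_frame = [(10, 10, 50, 50), (200, 80, 120, 90)]
    print(process_frame(cur_frame, prev_frame, mgr))
```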