Recent advancements in array-camera videography enable real-time capture of ultra-high-definition (Ultra-HD) videos, providing rich visual information over a large field of view. However, promptly processing such data with state-of-the-art transformer-based vision foundation models incurs significant computational overhead for on-device computing or transmission overhead for cloud computing. In this paper, we present Hyperion, the first cloud-device collaborative framework that enables low-latency inference on Ultra-HD vision data using off-the-shelf vision transformers over dynamic networks. Hyperion addresses the computational and transmission bottlenecks of Ultra-HD vision transformers by exploiting intrinsic properties of vision transformer models. Specifically, Hyperion integrates a collaboration-aware importance scorer that identifies critical regions at the patch level, a dynamic scheduler that adaptively adjusts patch transmission quality to balance latency and accuracy under varying network conditions, and a weighted ensembler that fuses edge and cloud results to improve accuracy. Experimental results demonstrate that Hyperion improves the frame processing rate by up to 1.61× and accuracy by up to 20.2% compared with state-of-the-art baselines across various network environments.
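To make the three-stage pipeline concrete, the following is a minimal Python sketch of the interfaces the abstract describes. All function names, the variance-based saliency proxy, the greedy budget scheduler, and the fixed ensemble weight are illustrative assumptions for exposition, not Hyperion's actual algorithms.

```python
import numpy as np

def score_patches(frame_patches):
    """Hypothetical importance scorer: rank patches by a cheap saliency
    proxy (here, per-patch pixel variance). Hyperion's scorer is
    collaboration-aware; this stand-in only illustrates the interface."""
    return np.array([p.var() for p in frame_patches])

def schedule_quality(scores, bandwidth_budget, hi_cost=1.0, lo_cost=0.25):
    """Hypothetical scheduler: every patch is sent at low quality by
    default, and the highest-scoring patches are upgraded to high
    quality until an (illustrative) bandwidth budget is exhausted."""
    order = np.argsort(scores)[::-1]            # patches, most important first
    quality = np.full(len(scores), "low", dtype=object)
    spent = lo_cost * len(scores)               # baseline cost of all-low quality
    for idx in order:
        if spent + (hi_cost - lo_cost) > bandwidth_budget:
            break                               # budget exhausted, rest stay low
        quality[idx] = "high"
        spent += hi_cost - lo_cost
    return quality

def fuse_logits(edge_logits, cloud_logits, w_cloud=0.7):
    """Weighted ensemble of edge and cloud predictions; the fixed weight
    is a placeholder, not the paper's actual weighting scheme."""
    return w_cloud * cloud_logits + (1.0 - w_cloud) * edge_logits
```

In this sketch, the scheduler mediates the latency-accuracy trade-off the abstract refers to: shrinking `bandwidth_budget` under a congested network leaves more patches at low quality, while the ensembler lets the edge model's local result compensate when the cloud result is degraded or delayed.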