Adaptive sampling that exploits the spatiotemporal redundancy in videos is critical for always-on action recognition on wearable devices with limited computing and battery resources. The commonly used fixed sampling strategy is not context-aware and may under-sample the visual content, adversely impacting both computational efficiency and accuracy. Inspired by the concepts of foveal vision and pre-attentive processing from the human visual perception mechanism, we introduce a novel adaptive spatiotemporal sampling scheme for efficient action recognition. Our system pre-scans the global scene context at low resolution and decides whether to skip a frame or request high-resolution features at salient regions for further processing. We validate the system on the EPIC-KITCHENS and UCF-101 datasets for action recognition, and show that our proposed approach can greatly speed up inference with a tolerable loss of accuracy compared with state-of-the-art baselines.
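To make the pre-scan / high-resolution-request mechanism concrete, below is a minimal PyTorch sketch of the idea described above: a lightweight policy network scans a low-resolution frame, emits a frame-level skip probability and a coarse saliency map, and, when the frame is kept, crops the corresponding high-resolution region for the downstream recognition backbone. The module structure, head names, threshold, and crop size here are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class AdaptiveSpatiotemporalSampler(nn.Module):
    """Pre-scan a low-res frame; skip it or request a high-res crop.

    Sketch under stated assumptions: the conv backbone, heads,
    skip threshold, and crop size are hypothetical stand-ins.
    """

    def __init__(self, skip_threshold: float = 0.5, crop_size: int = 112):
        super().__init__()
        self.skip_threshold = skip_threshold
        self.crop_size = crop_size
        # Cheap backbone for the low-resolution pre-scan pass.
        self.prescan = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.skip_head = nn.Linear(32, 1)         # frame-level skip score
        self.saliency_head = nn.Conv2d(32, 1, 1)  # coarse spatial saliency

    def forward(self, low_res, high_res):
        feat = self.prescan(low_res)                              # (B,32,h,w)
        skip_prob = torch.sigmoid(self.skip_head(feat.mean(dim=(2, 3))))
        saliency = self.saliency_head(feat)                       # (B,1,h,w)

        crops = []
        for b in range(low_res.size(0)):
            if skip_prob[b].item() > self.skip_threshold:
                crops.append(None)  # skip: spend no high-res compute
                continue
            # Map the most salient low-res cell to high-res coordinates.
            s = saliency[b, 0]
            cy, cx = divmod(int(torch.argmax(s)), s.size(1))
            H, W = high_res.shape[2:]
            cy, cx = cy * H // s.size(0), cx * W // s.size(1)
            half = self.crop_size // 2
            top = max(0, min(cy - half, H - self.crop_size))
            left = max(0, min(cx - half, W - self.crop_size))
            crops.append(high_res[b:b + 1, :,
                                  top:top + self.crop_size,
                                  left:left + self.crop_size])
        return crops, skip_prob


# Usage: one high-res crop (or None, for skipped frames) per input frame
# would feed the action-recognition backbone.
sampler = AdaptiveSpatiotemporalSampler()
low = torch.randn(4, 3, 64, 64)      # cheap pre-scan input
high = torch.randn(4, 3, 256, 256)   # full-resolution frames
crops, p_skip = sampler(low, high)
```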