In this work, we introduce a novel task, Human-centric Spatio-Temporal Video Grounding (HC-STVG). Unlike existing referring expression tasks in images or videos, HC-STVG focuses on humans and aims to localize a spatio-temporal tube of the target person in an untrimmed video based on a given textual description. This task is useful, especially for healthcare and security-related applications, where surveillance videos can be extremely long but only a specific person during a specific period of time is of interest. HC-STVG is a video grounding task that requires both spatial (where) and temporal (when) localization. Unfortunately, existing grounding methods cannot handle this task well. We tackle this task by proposing an effective baseline method named Spatio-Temporal Grounding with Visual Transformers (STGVT), which utilizes Visual Transformers to extract cross-modal representations for video-sentence matching and temporal localization. To facilitate this task, we also contribute an HC-STVG dataset consisting of 5,660 video-sentence pairs on complex multi-person scenes. Specifically, each video lasts 20 seconds and is paired with a natural query sentence containing an average of 17.25 words. Extensive experiments are conducted on this dataset, demonstrating that the newly proposed method outperforms existing baseline methods.
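To make the cross-modal idea concrete, the sketch below shows one common way a transformer can jointly encode visual and textual tokens for video-sentence matching and temporal localization. It is a minimal illustration only, not the paper's actual STGVT architecture: all module names (`CrossModalGrounder`, `match_head`, `bound_head`), the feature dimensions, and the use of a [CLS]-style token are assumptions introduced here for exposition.

```python
# A minimal sketch (PyTorch) of transformer-based cross-modal encoding for
# video-sentence matching and temporal localization. Hypothetical design;
# hyper-parameters and heads are illustrative, not the STGVT implementation.
import torch
import torch.nn as nn

class CrossModalGrounder(nn.Module):
    def __init__(self, dim=256, n_heads=8, n_layers=4, vocab=10000):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, dim)          # sentence word tokens
        self.vis_proj = nn.Linear(2048, dim)              # per-frame tube features
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))   # [CLS]-style matching token
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.match_head = nn.Linear(dim, 1)               # video-sentence match score
        self.bound_head = nn.Linear(dim, 2)               # per-frame start/end logits

    def forward(self, frame_feats, word_ids):
        # frame_feats: (B, T, 2048) visual features; word_ids: (B, L) token ids
        v = self.vis_proj(frame_feats)
        w = self.word_emb(word_ids)
        # Concatenate [CLS], visual tokens, and word tokens into one sequence
        # so self-attention can mix the two modalities.
        x = torch.cat([self.cls.expand(v.size(0), -1, -1), v, w], dim=1)
        x = self.encoder(x)
        match = self.match_head(x[:, 0])                  # cross-modal matching
        bounds = self.bound_head(x[:, 1:1 + v.size(1)])   # temporal localization
        return match, bounds

# Usage: score a 20-second clip (e.g., T=20 sampled frames) against a query.
model = CrossModalGrounder()
match, bounds = model(torch.randn(2, 20, 2048),
                      torch.randint(0, 10000, (2, 17)))
```

The design choice here, scoring sentence match from a pooled token while predicting temporal boundaries from per-frame tokens, mirrors the general pattern of pairing a global matching objective with framewise localization heads.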