We consider the problem of localizing a spatio-temporal tube in a video corresponding to a given text query. This is a challenging task that requires the joint and efficient modeling of temporal, spatial and multi-modal interactions. To address this task, we propose TubeDETR, a transformer-based architecture inspired by the recent success of such models for text-conditioned object detection. Our model notably includes: (i) an efficient video and text encoder that models spatial multi-modal interactions over sparsely sampled frames and (ii) a space-time decoder that jointly performs spatio-temporal localization. We demonstrate the advantage of our proposed components through an extensive ablation study. We also evaluate our full approach on the spatio-temporal video grounding task and demonstrate improvements over the state of the art on the challenging VidSTG and HC-STVG benchmarks. Code and trained models are publicly available at https://antoyang.github.io/tubedetr.html.