Although large-scale video-language pre-training models, which usually build a global alignment between the video and the text, have achieved remarkable progress on various downstream tasks, the idea of adopting fine-grained information during the pre-training stage is not well explored. In this work, we propose STOA-VLP, a pre-training framework that jointly models object and action information across spatial and temporal dimensions. More specifically, the model treats object trajectories across frames and multiple action features extracted from the video as fine-grained features. In addition, we design two auxiliary tasks to better incorporate both kinds of information into the pre-training process of the video-language model. The first is dynamic object-text alignment, which builds a better connection between object trajectories and the relevant noun tokens. The second is spatial-temporal action set prediction, which guides the model to generate consistent action features by predicting the actions mentioned in the text. Extensive experiments on three downstream tasks (video captioning, text-video retrieval, and video question answering) demonstrate the effectiveness of our proposed STOA-VLP (e.g., a 3.7 ROUGE-L improvement on the MSR-VTT video captioning benchmark and a 2.9% accuracy improvement on the MSVD video question answering benchmark over previous approaches).
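To make the dynamic object-text alignment idea concrete, the sketch below shows one plausible way such an objective could be computed: each noun-token embedding is dynamically matched to its most similar object-trajectory feature, and the matched similarity is encouraged to be high. This is a minimal illustration under our own assumptions, not the paper's exact formulation; the function name, tensor shapes, and the hard-matching scheme are all hypothetical.

```python
# Minimal sketch (assumed formulation, not necessarily the paper's) of a
# dynamic object-text alignment objective.
import torch
import torch.nn.functional as F


def dynamic_object_text_alignment(
    traj_feats: torch.Tensor,   # (num_trajectories, dim): object trajectory features
    noun_feats: torch.Tensor,   # (num_nouns, dim): embeddings of noun tokens in the caption
) -> torch.Tensor:
    """Return a scalar loss that pulls each noun toward its best-matching trajectory."""
    # Cosine similarity between every (noun, trajectory) pair.
    traj = F.normalize(traj_feats, dim=-1)
    nouns = F.normalize(noun_feats, dim=-1)
    sim = nouns @ traj.t()                      # (num_nouns, num_trajectories)

    # Dynamically match each noun to its most similar trajectory
    # (hard assignment here; a softer scheme is equally plausible)
    # and encourage the matched similarity to be high.
    matched_sim, _ = sim.max(dim=-1)            # (num_nouns,)
    return -matched_sim.mean()


if __name__ == "__main__":
    # Toy usage with random features: 10 trajectories, 4 noun tokens, 256-d embeddings.
    loss = dynamic_object_text_alignment(torch.randn(10, 256), torch.randn(4, 256))
    print(loss.item())
```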