Efficient video-language modeling must account for computational cost, since the number of video frames is large and sometimes intractable. Fully parametric approaches such as the attention mechanism may not be ideal, as their computational cost grows quadratically with video length. Instead, previous studies have relied on offline feature extraction or frame sampling to represent videos efficiently, focusing on cross-modal modeling of short video clips. In this paper, we propose a semi-parametric video-grounded text generation model, SeViT, offering a novel perspective on scalable video-language modeling toward long, untrimmed videos. Treating a video as an external data store, SeViT combines a non-parametric frame retriever, which selects a few query-relevant frames from the data store for a given query, with a parametric generator that effectively aggregates the retrieved frames with the query via late fusion. Experimental results demonstrate that our method has a significant advantage on longer videos and causal video understanding. Moreover, our model achieves new state-of-the-art results on four video-language datasets: iVQA (+4.8), Next-QA (+6.9), and ActivityNet-QA (+4.8) in accuracy, and MSRVTT-Caption (+3.6) in CIDEr.
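To make the retrieve-then-fuse pipeline concrete, below is a minimal sketch of the semi-parametric idea: frames are embedded offline into a data store, a non-parametric retriever picks the top-k query-relevant frames by cosine similarity, and per-frame generator outputs are combined by late fusion. The encoders, the top-k size, and the mean-pooling fusion rule are illustrative assumptions for this sketch, not SeViT's actual components.

```python
# Minimal sketch of a semi-parametric retrieve-then-generate pipeline.
# The encoders below are random-projection stand-ins (assumptions); a real
# system would use pretrained text/frame encoders and a trained generator.
import numpy as np

def embed_query(query: str, dim: int = 512) -> np.ndarray:
    # Hypothetical text encoder: deterministic random unit vector per query.
    rng = np.random.default_rng(abs(hash(query)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def embed_frames(num_frames: int, dim: int = 512) -> np.ndarray:
    # Hypothetical frame encoder applied offline to build the video data store.
    rng = np.random.default_rng(0)
    f = rng.standard_normal((num_frames, dim))
    return f / np.linalg.norm(f, axis=1, keepdims=True)

def retrieve_topk(query_vec: np.ndarray, frame_vecs: np.ndarray, k: int = 5):
    # Non-parametric retrieval: cosine similarity, keep the k best frames.
    scores = frame_vecs @ query_vec
    top = np.argsort(scores)[-k:][::-1]
    return top, scores[top]

def late_fusion(per_frame_logits: np.ndarray) -> np.ndarray:
    # One simple late-fusion choice: average the per-frame answer logits.
    return per_frame_logits.mean(axis=0)

# Usage: retrieve 5 of 10,000 frames, then fuse per-frame generator outputs.
store = embed_frames(10_000)
q = embed_query("What does the person pick up after opening the fridge?")
idx, sims = retrieve_topk(q, store, k=5)
# Stand-in for generator logits over a 30k-token vocabulary, one row per frame.
logits = np.random.default_rng(1).standard_normal((len(idx), 30_000))
fused = late_fusion(logits)
print(idx, fused.shape)
```

The key design point this sketch illustrates is that retrieval cost scales with a single similarity pass over the frame store rather than with attention over all frames, so only the k retrieved frames ever reach the parametric generator.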