The last several years have witnessed remarkable progress in video-and-language (VidL) understanding. However, most modern VidL approaches use complex and specialized model architectures and sophisticated pretraining protocols, making the reproducibility, analysis, and comparison of these frameworks difficult. Hence, instead of proposing yet another new VidL model, this paper conducts a thorough empirical study demystifying the most important factors in VidL model design. Among the factors we investigate are (i) the spatiotemporal architecture design, (ii) the multimodal fusion schemes, (iii) the pretraining objectives, (iv) the choice of pretraining data, (v) pretraining and finetuning protocols, and (vi) dataset and model scaling. Our empirical study reveals that the most important design factors include: temporal modeling, video-to-text multimodal fusion, masked modeling objectives, and joint training on images and videos. Using these empirical insights, we then develop a step-by-step recipe, dubbed VindLU, for effective VidL pretraining. Our final model trained using this recipe achieves results comparable to or better than the state of the art on several VidL tasks without relying on external CLIP pretraining. In particular, on the text-to-video retrieval task, our approach obtains 61.2% on DiDeMo and 55.0% on ActivityNet, outperforming the current SOTA by 7.8% and 6.1%, respectively. Furthermore, our model also obtains state-of-the-art video question-answering results on ActivityNet-QA, MSRVTT-QA, MSRVTT-MC, and TVQA. Our code and pretrained models are publicly available at: https://github.com/klauscc/VindLU.