Contrastive learning has been widely used to train transformer-based vision-language models for video-text alignment and multi-modal representation learning. This paper presents a new algorithm called Token-Aware Cascade contrastive learning (TACo) that improves contrastive learning using two novel techniques. The first is the token-aware contrastive loss, which is computed by taking into account the syntactic classes of words. This is motivated by the observation that, for a video-text pair, the content words in the text, such as nouns and verbs, are more likely to be aligned with the visual contents in the video than the function words. Second, a cascade sampling method is applied to generate a small set of hard negative examples for efficient loss estimation in the multi-modal fusion layers. To validate the effectiveness of TACo, we finetune pretrained models on a set of downstream tasks, including text-video retrieval (YouCook2, MSR-VTT and ActivityNet), video action step localization (CrossTask), and video action segmentation (COIN). The results show that our models attain consistent improvements over previous methods across different experimental settings, setting a new state of the art on the three public text-video retrieval benchmarks YouCook2, MSR-VTT and ActivityNet.
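To make the two techniques concrete, the following is a minimal PyTorch sketch of the ideas as described above. The tensor shapes, the construction of `content_mask`, and the helper names are illustrative assumptions, not the paper's released implementation.

```python
# Sketch of (1) a token-aware contrastive loss that weights only content words
# (nouns/verbs) and (2) cascade selection of hard negatives for fusion layers.
# Shapes, masks, and function names are assumptions for illustration only.
import torch
import torch.nn.functional as F


def token_aware_contrastive_loss(video_emb, text_tok_emb, content_mask, temperature=0.07):
    """Symmetric InfoNCE loss where each caption is represented by the average
    of its content-word token embeddings (function words get zero weight).

    video_emb:    (B, D)    pooled video embeddings
    text_tok_emb: (B, T, D) per-token text embeddings
    content_mask: (B, T)    1.0 for content words (nouns/verbs), 0.0 otherwise
    """
    v = F.normalize(video_emb, dim=-1)                    # (B, D)
    t = F.normalize(text_tok_emb, dim=-1)                 # (B, T, D)

    # Similarity of every video (b) with every token (t) of every caption (c).
    tok_sim = torch.einsum('bd,ctd->bct', v, t)           # (B, B, T)

    # Average over content words only.
    weights = content_mask / content_mask.sum(dim=-1, keepdim=True).clamp(min=1e-6)
    sim = (tok_sim * weights.unsqueeze(0)).sum(dim=-1)    # (B, B) video-to-text scores

    logits = sim / temperature
    labels = torch.arange(len(v), device=v.device)
    # Contrast in both video-to-text and text-to-video directions.
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))


def cascade_hard_negatives(sim, k=4):
    """Given coarse similarity scores (B, B), return for each video the indices
    of the top-k most similar non-matching captions, so the expensive
    multi-modal fusion layers only need to score this small hard-negative set."""
    sim = sim.clone()
    sim.fill_diagonal_(float('-inf'))                     # exclude the positive pair
    return sim.topk(k, dim=-1).indices                    # (B, k)
```

In this sketch, the cheap token-level scores serve double duty: they drive the first-stage contrastive loss and also rank candidates so that only a handful of hard negatives are forwarded through the heavier fusion layers.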