We present a simplified, task-agnostic multi-modal pre-training approach that can accept video input, text input, or both, for a variety of end tasks. Existing pre-training approaches are task-specific: they adopt either a single cross-modal encoder that requires both modalities, limiting their use for retrieval-style end tasks, or more complex multitask learning with two unimodal encoders, limiting early cross-modal fusion. We instead introduce new pre-training masking schemes that better mix across modalities (e.g., by forcing masked text tokens to predict the closest video embeddings) while also maintaining separability (e.g., unimodal predictions are sometimes required, without using all of the input). Experimental results show strong performance across a wider range of tasks than any previous method, often outperforming task-specific pre-training.
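To make the cross-modal masking scheme concrete, below is a minimal PyTorch sketch of the core idea: a single joint encoder processes the concatenated video and text sequence, text positions are randomly masked, and each masked text position is trained to regress toward its cosine-nearest video embedding, which pushes the two modalities to mix early. This is an illustrative assumption of how such a loss could look, not the paper's implementation; all names (`TinyJointEncoder`, `cross_modal_mask_loss`, dimensions, the MSE target choice) are hypothetical.

```python
# Minimal sketch (not the authors' code) of cross-modal masking:
# masked text tokens predict the closest video embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

d_model = 64  # assumed hidden size for this toy example

class TinyJointEncoder(nn.Module):
    """Single encoder over the concatenated [video; text] sequence (early fusion)."""
    def __init__(self, vocab_size=1000, d=d_model):
        super().__init__()
        self.text_emb = nn.Embedding(vocab_size, d)
        self.video_proj = nn.Linear(d, d)  # project video features into the shared space
        layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, video_feats, text_ids):
        v = self.video_proj(video_feats)   # (B, Tv, d)
        t = self.text_emb(text_ids)        # (B, Tt, d)
        return self.encoder(torch.cat([v, t], dim=1))

def cross_modal_mask_loss(model, video_feats, text_ids, mask_token_id=0, p=0.15):
    B, Tv, _ = video_feats.shape
    Tt = text_ids.shape[1]
    # Randomly mask text positions (guarantee at least one masked position).
    mask = torch.rand(B, Tt) < p
    if not mask.any():
        mask[:, 0] = True
    masked_ids = text_ids.masked_fill(mask, mask_token_id)
    hidden = model(video_feats, masked_ids)
    video_hidden = hidden[:, :Tv]          # (B, Tv, d)
    text_hidden = hidden[:, Tv:]           # (B, Tt, d)
    # For each masked text position, the target is the cosine-nearest video embedding.
    sim = torch.einsum(
        "btd,bvd->btv",
        F.normalize(text_hidden, dim=-1),
        F.normalize(video_hidden, dim=-1),
    )                                      # (B, Tt, Tv)
    nearest = sim.argmax(dim=-1)           # index of the closest video embedding
    targets = torch.gather(
        video_hidden, 1,
        nearest.unsqueeze(-1).expand(-1, -1, video_hidden.size(-1)))
    # Regress masked text states toward the (detached) video targets.
    return F.mse_loss(text_hidden[mask], targets.detach()[mask])

model = TinyJointEncoder()
video = torch.randn(2, 8, d_model)        # 2 clips, 8 frame features each
text = torch.randint(1, 1000, (2, 12))    # 2 captions, 12 tokens each
loss = cross_modal_mask_loss(model, video, text)
loss.backward()
```

The complementary "separability" masking the abstract mentions could be sketched analogously by sometimes dropping one modality entirely and requiring unimodal predictions from the same encoder.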