We present a simplified, task-agnostic multi-modal pre-training approach that can accept either video or text input, or both, for a variety of end tasks. Existing pre-training methods are task-specific: they adopt either a single cross-modal encoder that requires both modalities, limiting their use for retrieval-style end tasks, or more complex multitask learning with two unimodal encoders, limiting early cross-modal fusion. We instead introduce new pre-training masking schemes that better mix across modalities (e.g., by forcing masked text tokens to predict the closest video embeddings) while also maintaining separability (e.g., unimodal predictions are sometimes required, without using all of the input). Experimental results show strong performance across a wider range of tasks than any previous method, often outperforming task-specific pre-training. Code is made available at https://github.com/pytorch/fairseq/examples/MMPT.
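To make the cross-modal masking idea concrete, here is a minimal sketch (not the authors' implementation) of training masked text positions to predict their closest video embeddings; the function name, dimensions, and masking rate are illustrative assumptions.

```python
# Minimal sketch of cross-modal masked prediction: masked text positions are
# regressed toward their nearest video embedding. Names and shapes are assumed.
import torch
import torch.nn.functional as F

def cross_modal_mask_loss(text_emb, video_emb, mask_prob=0.15):
    """text_emb: (T, D) contextual text embeddings; video_emb: (V, D) video clip embeddings."""
    T, _ = text_emb.shape
    mask = torch.rand(T) < mask_prob            # sample text positions to mask
    if not mask.any():
        mask[0] = True                          # ensure at least one masked position
    masked = text_emb[mask]                     # (M, D) representations at masked positions
    sim = masked @ video_emb.t()                # (M, V) similarity to every video embedding
    targets = video_emb[sim.argmax(dim=-1)]     # closest video embedding per masked token
    # regress each masked text representation toward its nearest video embedding
    return F.mse_loss(masked, targets.detach())

# usage: in practice both embeddings would come from a shared encoder over the two modalities
loss = cross_modal_mask_loss(torch.randn(20, 768), torch.randn(8, 768))
```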