Video-and-language understanding has a variety of applications in industry, such as video question answering, text-video retrieval, and multi-label classification. Existing video-and-language understanding methods generally adopt heavy multi-modal encoders and feature fusion modules, which consume large amounts of GPU memory. In particular, they struggle to handle the dense video frames and long text that are prevalent in industrial applications. In this paper, we propose MuLTI, a highly accurate and memory-efficient video-and-language understanding model that achieves efficient and effective feature fusion through feature sampling and attention modules. As a result, MuLTI can handle longer sequences with limited GPU memory. We then introduce an attention-based adapter to the encoders, which finetunes the shallow features to improve the model's performance with low GPU memory consumption. Finally, to further improve the model's performance, we introduce a new pretraining task named Multiple Choice Modeling to bridge the task gap between pretraining and downstream tasks and enhance the model's ability to align video and text. Benefiting from the efficient feature fusion module, the attention-based adapter, and the new pretraining task, MuLTI achieves state-of-the-art performance on multiple datasets. The implementation and pretrained models will be released.