In the past few years, the emergence of pre-trained models has brought uni-modal fields such as computer vision (CV) and natural language processing (NLP) into a new era. Substantial work has shown that such models benefit downstream uni-modal tasks and avoid the cost of training a new model from scratch. Can such pre-trained models also be applied to multi-modal tasks? Researchers have explored this question and made significant progress. This paper surveys recent advances and new frontiers in vision-language pre-training (VLP), including image-text and video-text pre-training. To give readers a better overall grasp of VLP, we first review its recent advances from five aspects: feature extraction, model architecture, pre-training objectives, pre-training datasets, and downstream tasks. Then, we summarize specific VLP models in detail. Finally, we discuss the new frontiers of VLP. To the best of our knowledge, this is the first survey on VLP. We hope this survey can shed light on future research in the VLP field.