With the urgent demand for generalized deep models, many pre-trained big models have been proposed, such as BERT, ViT, and GPT. Inspired by the success of these models in single domains (such as computer vision and natural language processing), multi-modal pre-trained big models have drawn increasing attention in recent years. In this work, we give a comprehensive survey of these models and hope this paper can provide new insights and help new researchers track the most cutting-edge works. Specifically, we first introduce the background of multi-modal pre-training by reviewing conventional deep learning and pre-training works in natural language processing, computer vision, and speech. Then, we introduce the task definition, key challenges, and advantages of multi-modal pre-trained models (MM-PTMs), and discuss MM-PTMs with a focus on data, objectives, network architectures, and knowledge-enhanced pre-training. After that, we introduce the downstream tasks used for the validation of large-scale MM-PTMs, including generative, classification, and regression tasks. We also give a visualization and analysis of the model parameters and results on representative downstream tasks. Finally, we point out possible research directions for this topic that may benefit future works. In addition, we maintain a continuously updated paper list for large-scale pre-trained multi-modal big models: https://github.com/wangxiao5791509/MultiModal_BigModels_Survey