In this work, we pursue a unified paradigm for multimodal pretraining to break the scaffolds of complex task- and modality-specific customization. We propose OFA, a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, and language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) into a simple sequence-to-sequence learning framework based on the encoder-decoder architecture. OFA performs pretraining and finetuning with task instructions and introduces no extra task-specific layers for finetuning. Experimental results show that OFA achieves new state-of-the-art results on a series of multimodal tasks, including image captioning (COCO test CIDEr: 149.6), text-to-image generation (COCO test FID: 10.5), VQA (test-std acc.: 80.02), SNLI-VE (test acc.: 90.20), and referring expression comprehension (RefCOCO / RefCOCO+ / RefCOCOg test acc.: 92.93 / 90.10 / 85.20). Through extensive analyses, we demonstrate that OFA reaches performance comparable to uni-modal pretrained models (e.g., BERT, MAE, MoCo v3, SimCLR v2, etc.) on uni-modal tasks, including NLU, NLG, and image classification, and that it transfers effectively to unseen tasks and domains. Code will be released soon at http://github.com/OFA-Sys/OFA
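To make the instruction-based unification concrete, below is a minimal sketch of how heterogeneous tasks can be cast as a single (source, target) sequence format. The instruction templates for captioning and grounding follow examples given in the paper; the `Seq2SeqExample` structure, the `make_example` helper, and the inline image placeholders are hypothetical illustrations under stated assumptions, not OFA's actual data pipeline (OFA feeds image patches to the encoder rather than splicing them into the text string).

```python
# Sketch: instruction-conditioned task unification for seq2seq learning.
# All names here are illustrative assumptions, not OFA's code.

from dataclasses import dataclass


@dataclass
class Seq2SeqExample:
    source: str  # task instruction plus linearized inputs
    target: str  # output sequence: plain text, or discretized region/image tokens


def make_example(task: str, inputs: dict) -> Seq2SeqExample:
    """Map a task-specific record onto a generic (source, target) pair."""
    if task == "caption":
        return Seq2SeqExample(
            source=f'what does the image describe? {inputs["image"]}',
            target=inputs["caption"],
        )
    if task == "vqa":
        return Seq2SeqExample(
            source=f'{inputs["question"]} {inputs["image"]}',
            target=inputs["answer"],
        )
    if task == "grounding":
        return Seq2SeqExample(
            source=(
                f'which region does the text "{inputs["phrase"]}" describe? '
                f'{inputs["image"]}'
            ),
            target=inputs["region"],  # coordinates quantized into location tokens
        )
    raise ValueError(f"unknown task: {task}")


# Because every task shares the same (source -> target) format, one
# encoder-decoder model can be pretrained and finetuned on all of them
# without adding task-specific output heads.
example = make_example(
    "vqa",
    {
        "question": "what color is the bus?",
        "image": "<image features>",
        "answer": "yellow",
    },
)
print(example.source)
print(example.target)
```

This is why finetuning needs no extra layers: a new task only requires a new instruction template, while the model, vocabulary, and training objective stay fixed.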