Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations also set new state-of-the-art results on Flickr30K and MSCOCO benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.
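To make the dual-encoder contrastive setup described above concrete, the following is a minimal PyTorch sketch, not the paper's implementation: the projection towers, the learnable temperature initialization, and the random features standing in for encoder outputs are all simplifying assumptions. It shows two modality-specific towers mapped into a shared embedding space and trained with a symmetric in-batch contrastive loss, where matched image-text pairs are positives and all other pairings in the batch are negatives.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    """Toy dual encoder: one projection per modality into a shared embedding space.
    The paper pairs a large image backbone with a text transformer; here, plain
    linear projections over precomputed features stand in for both towers."""
    def __init__(self, image_dim, text_dim, embed_dim=256):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)
        # Learnable softmax temperature, common in contrastive training
        # (initialization to 1.0 here is an arbitrary choice for the sketch).
        self.log_temperature = nn.Parameter(torch.tensor(0.0))

    def forward(self, image_feats, text_feats):
        # L2-normalize so the dot product below is cosine similarity.
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        return img, txt, self.log_temperature.exp()

def contrastive_loss(img, txt, temperature):
    """Symmetric in-batch contrastive loss: the diagonal of the similarity
    matrix holds matched pairs; every off-diagonal entry is a negative."""
    logits = img @ txt.t() / temperature              # (batch, batch) similarities
    labels = torch.arange(img.size(0), device=img.device)
    loss_i2t = F.cross_entropy(logits, labels)        # image-to-text direction
    loss_t2i = F.cross_entropy(logits.t(), labels)    # text-to-image direction
    return (loss_i2t + loss_t2i) / 2

# Usage with random features standing in for encoder outputs.
model = DualEncoder(image_dim=512, text_dim=768)
img_feats = torch.randn(8, 512)
txt_feats = torch.randn(8, 768)
img, txt, temp = model(img_feats, txt_feats)
loss = contrastive_loss(img, txt, temp)
loss.backward()
```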