The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining. However, most publicly available CLIP models are pretrained on English data, and a CLIP pretrained on Chinese data is hard to find. We believe that pretraining a Chinese CLIP is important to both research and industry for the following reasons. First, it can benefit vision-language retrieval in Chinese and thus promote language-specific multimodal representation learning. Second, the distribution of images on Chinese websites is likely to differ from that on English websites. In this work, we construct a large-scale dataset of image-text pairs in Chinese, most of which are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on the new dataset. We develop five Chinese CLIP models of multiple sizes, spanning 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, in which the model is first trained with the image encoder frozen and then trained with all parameters optimized, to achieve enhanced model performance. Our comprehensive experiments demonstrate that Chinese CLIP achieves state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in both the zero-shot and finetuning setups, and it achieves competitive performance in zero-shot image classification on the ELEVATER benchmark (Li et al., 2022). Moreover, our ablation study shows that the two-stage pretraining method is more effective than the alternative options. We release our code at https://github.com/OFA-Sys/Chinese-CLIP
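To make the two-stage schedule concrete, below is a minimal PyTorch sketch of the idea: stage 1 keeps the image encoder frozen while the rest of the model is tuned, and stage 2 unfreezes everything and optimizes all parameters jointly. The `ToyCLIP` module, its tensor shapes, and the learning rates are hypothetical stand-ins for illustration only; the actual training code is in the repository linked above.

```python
# A minimal sketch of the two-stage pretraining schedule, assuming a generic
# dual-encoder contrastive model; details differ from the released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCLIP(nn.Module):
    """Hypothetical dual-encoder mapping both modalities to a shared space."""
    def __init__(self, dim=64):
        super().__init__()
        self.image_encoder = nn.Linear(128, dim)   # stand-in for a vision tower
        self.text_encoder = nn.Linear(256, dim)    # stand-in for a text tower
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ~ln(1/0.07)

    def forward(self, images, texts):
        img = F.normalize(self.image_encoder(images), dim=-1)
        txt = F.normalize(self.text_encoder(texts), dim=-1)
        return self.logit_scale.exp() * img @ txt.t()

def contrastive_loss(logits):
    # Symmetric InfoNCE loss: matched image-text pairs lie on the diagonal.
    labels = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

model = ToyCLIP()

# Stage 1: freeze the image encoder and tune only the remaining parameters.
for p in model.image_encoder.parameters():
    p.requires_grad = False
stage1_opt = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)

# Stage 2: unfreeze the image encoder and optimize all parameters jointly.
for p in model.image_encoder.parameters():
    p.requires_grad = True
stage2_opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

# One illustrative stage-2 step with random tensors in place of a real batch.
images, texts = torch.randn(8, 128), torch.randn(8, 256)
loss = contrastive_loss(model(images, texts))
loss.backward()
stage2_opt.step()
```

Freezing the image tower in stage 1 (in the spirit of locked-image tuning) lets the text encoder align to a stable visual embedding space before stage 2 adapts both towers end to end.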