Contrastive language-image pre-training (CLIP) has gained increasing attention for its potential across a wide range of scenarios. In this paper, we propose EVA-CLIP, a series of models that significantly improve the efficiency and effectiveness of CLIP training. Our approach incorporates new techniques for representation learning, optimization, and augmentation, enabling EVA-CLIP to achieve superior performance compared with previous CLIP models of the same parameter count, at a significantly smaller training cost. Notably, our largest 5.0B-parameter EVA-02-CLIP-E/14+, trained with only 9 billion seen samples, achieves 82.0 zero-shot top-1 accuracy on ImageNet-1K val. A smaller EVA-02-CLIP-L/14+, with only 430 million parameters and 6 billion seen samples, achieves 80.4 zero-shot top-1 accuracy on ImageNet-1K val. To facilitate open access and open research, we release the complete suite of EVA-CLIP to the community at https://github.com/baaivision/EVA/tree/master/EVA-CLIP.
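For readers unfamiliar with the training objective behind CLIP-style models, the sketch below illustrates the standard symmetric image-text contrastive (InfoNCE) loss that this family of models optimizes. It is a minimal, illustrative PyTorch implementation, not the EVA-CLIP codebase itself; the function name and tensor shapes are assumptions chosen for clarity.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features: torch.Tensor,
                          text_features: torch.Tensor,
                          logit_scale: torch.Tensor) -> torch.Tensor:
    """Symmetric image-text contrastive loss (illustrative sketch).

    image_features: [batch, dim] embeddings from the image encoder
    text_features:  [batch, dim] embeddings from the text encoder
    logit_scale:    learned temperature (scalar), as in CLIP
    """
    # Normalize embeddings so dot products become cosine similarities.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Pairwise similarity matrix, scaled by the learned temperature.
    logits_per_image = logit_scale * image_features @ text_features.t()
    logits_per_text = logits_per_image.t()

    # Matched image-text pairs lie on the diagonal of the similarity matrix.
    targets = torch.arange(image_features.shape[0], device=image_features.device)
    loss_img = F.cross_entropy(logits_per_image, targets)
    loss_txt = F.cross_entropy(logits_per_text, targets)
    return (loss_img + loss_txt) / 2
```

Zero-shot ImageNet-1K classification, as reported above, follows the same recipe: class names are embedded with the text encoder via prompt templates, and each image is assigned the class whose text embedding has the highest cosine similarity.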