Scaling up neural networks has led to remarkable performance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size, and compute, which offers valuable guidance as large-scale experiments are becoming increasingly expensive. However, previous work on scaling laws has primarily used private data and models or focused on uni-modal language or vision learning. To address these limitations, we investigate scaling laws for contrastive language-image pre-training (CLIP) with the public LAION dataset and the open-source OpenCLIP repository. Our large-scale experiments involve models trained on up to two billion image-text pairs and identify power law scaling for multiple downstream tasks including zero-shot classification, retrieval, linear probing, and end-to-end fine-tuning. We find that the training distribution plays a key role in scaling laws as the OpenAI and OpenCLIP models exhibit different scaling behavior despite identical model architectures and similar training recipes. We open-source our evaluation workflow and all models, including the largest public CLIP models, to ensure reproducibility and make scaling laws research more accessible. Source code and instructions to reproduce this study will be available at https://github.com/LAION-AI/scaling-laws-openclip
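As a minimal sketch of what "power law scaling" means in practice: a quantity such as downstream error is modeled as E = a * C^b in, say, compute C, which is linear in log-log space and can be fit with a simple regression. The data points and parameter values below are made up for illustration and are not results from this study.

```python
import numpy as np

# Illustrative sketch: fitting a power law E = a * C^b (e.g. error vs. compute),
# the functional form used in scaling-laws studies. All numbers are synthetic.
compute = np.array([1e9, 1e10, 1e11, 1e12])
error = np.array([0.50, 0.35, 0.245, 0.1715])  # synthetic, decays ~ C^-0.155

# A power law is linear in log-log space: log E = log a + b * log C,
# so an ordinary least-squares line fit recovers the exponent b.
b, log_a = np.polyfit(np.log(compute), np.log(error), 1)
a = np.exp(log_a)
print(f"fitted exponent b = {b:.3f}")
```

Extrapolating such a fit is what lets scaling-law studies predict performance at budgets larger than any single experiment.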