Contrastive vision-language models (e.g., CLIP) are typically created by updating all the parameters of a vision model and a language model through contrastive training. Can such models be created by updating only a small number of parameters in an already-trained language model and vision model? The literature describes techniques that can create vision-language models by updating a small number of parameters in a language model, but these require already aligned visual representations and are non-contrastive, and hence unusable for latency-sensitive applications such as neural search. We explore the feasibility and benefits of parameter-efficient contrastive vision-language alignment through transfer learning: creating a model such as CLIP by minimally updating an already-trained vision model and language model. We find that a minimal set of parameter updates ($<$7%) can achieve the same performance as full-model training, and that updating specific components ($<$1% of parameters) can match 75% of full-model training. We describe a series of experiments: we show that existing knowledge is conserved more strongly in parameter-efficient training and that parameter-efficient training scales with model and dataset size. Where paired image-text data is scarce but strong multilingual language models exist (e.g., low-resource languages), parameter-efficient training is even preferable to full-model training. Given a fixed compute budget, parameter-efficient training allows training larger models on the same hardware, achieving equivalent performance in less time. Parameter-efficient training hence constitutes an energy-efficient and effective training strategy for contrastive vision-language models that may be preferable to the full-model training paradigm for common use cases. Code and weights are available at https://github.com/codezakh/LilT.
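The abstract describes the approach only at a high level; the following is a minimal PyTorch sketch of one way parameter-efficient contrastive alignment can be set up: both pretrained encoders are frozen except for their LayerNorm parameters, small projection heads are trained from scratch, and the model is optimized with a CLIP-style symmetric contrastive loss. The names (`LiteAligner`, `mark_trainable`), the choice of LayerNorm-only tuning, and the assumption that each encoder returns pooled feature vectors are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch (assumptions noted above, not the paper's exact recipe):
# freeze pretrained vision and language encoders, re-enable gradients only for
# a small subset of parameters (here, LayerNorm weights plus new projection
# heads), and train with a CLIP-style symmetric contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


def mark_trainable(encoder: nn.Module) -> None:
    """Freeze the encoder, then unfreeze only its LayerNorm parameters."""
    for p in encoder.parameters():
        p.requires_grad = False
    for m in encoder.modules():
        if isinstance(m, nn.LayerNorm):
            for p in m.parameters():
                p.requires_grad = True


class LiteAligner(nn.Module):
    """Hypothetical wrapper around two pretrained encoders that each return
    pooled feature vectors of shape [batch, dim]."""

    def __init__(self, vision_encoder, text_encoder, vis_dim, txt_dim, embed_dim=256):
        super().__init__()
        self.vision_encoder, self.text_encoder = vision_encoder, text_encoder
        mark_trainable(self.vision_encoder)
        mark_trainable(self.text_encoder)
        # Projection heads are trained from scratch; together with the LayerNorms
        # they account for only a small fraction of total parameters.
        self.vis_proj = nn.Linear(vis_dim, embed_dim)
        self.txt_proj = nn.Linear(txt_dim, embed_dim)
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ~log(1/0.07), as in CLIP

    def forward(self, images, text_tokens):
        v = F.normalize(self.vis_proj(self.vision_encoder(images)), dim=-1)
        t = F.normalize(self.txt_proj(self.text_encoder(text_tokens)), dim=-1)
        logits = self.logit_scale.exp() * v @ t.t()
        targets = torch.arange(len(images), device=logits.device)
        # Symmetric image-to-text and text-to-image contrastive (InfoNCE) loss.
        return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```

Because only the LayerNorm parameters and the projection heads receive gradients, the optimizer state and backward-pass memory footprint shrink accordingly, which is what allows larger encoders to be trained on the same hardware under a fixed compute budget.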