This paper presents contrastive-tuning, a simple method employing contrastive training to align image and text models while still taking advantage of their pre-training. In our empirical study we find that locked pre-trained image models with unlocked text models work best. We call this instance of contrastive-tuning "Locked-image Tuning" (LiT), which just teaches a text model to read out good representations from a pre-trained image model for new tasks. A LiT model gains the capability of zero-shot transfer to new vision tasks, such as image classification or retrieval. The proposed LiT is widely applicable; it works reliably with multiple pre-training methods (supervised and unsupervised) and across diverse architectures (ResNet, Vision Transformers and MLP-Mixer) using three different image-text datasets. With the transformer-based pre-trained ViT-g/14 model, the LiT model achieves 85.2% zero-shot transfer accuracy on the ImageNet test set, and 82.5% on the challenging out-of-distribution ObjectNet test set.
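To make the training setup concrete, below is a minimal sketch (not the authors' released code) of the LiT objective: embeddings from a locked pre-trained image tower, a small stand-in text tower being tuned, and the symmetric contrastive (InfoNCE) loss that aligns the two. The names `lit_loss`, `text_params`, and the toy linear text tower are illustrative assumptions; in the paper the text tower is a full transformer and the loss temperature is learned.

```python
# A minimal sketch of a LiT-style objective, assuming a locked image tower
# and an unlocked text tower. Toy shapes and a linear text tower stand in
# for the paper's full encoders.
import jax
import jax.numpy as jnp
import optax

def lit_loss(text_params, img_emb, txt_features, temperature=0.1):
    # Image embeddings come from the locked tower; stop_gradient makes the
    # "locked" part explicit so no gradient flows into the image model.
    img_emb = jax.lax.stop_gradient(img_emb)
    # Hypothetical linear text tower; in LiT this is a transformer being tuned.
    txt_emb = txt_features @ text_params["w"]
    # L2-normalize both towers so logits are cosine similarities.
    img_emb = img_emb / jnp.linalg.norm(img_emb, axis=-1, keepdims=True)
    txt_emb = txt_emb / jnp.linalg.norm(txt_emb, axis=-1, keepdims=True)
    logits = img_emb @ txt_emb.T / temperature
    labels = jnp.arange(logits.shape[0])
    # Symmetric InfoNCE: each image should match its own caption and vice versa.
    loss_i2t = optax.softmax_cross_entropy_with_integer_labels(logits, labels)
    loss_t2i = optax.softmax_cross_entropy_with_integer_labels(logits.T, labels)
    return (loss_i2t.mean() + loss_t2i.mean()) / 2

# Toy usage: a batch of 4 aligned (image, text) pairs in a 16-d joint space.
key = jax.random.PRNGKey(0)
img_emb = jax.random.normal(key, (4, 16))       # frozen image-tower outputs
txt_features = jax.random.normal(key, (4, 32))  # raw text-tower features
params = {"w": jax.random.normal(key, (32, 16))}
loss, grads = jax.value_and_grad(lit_loss)(params, img_emb, txt_features)
```

At zero-shot inference time, the tuned text tower embeds class names (or captions), and an image is assigned to the class whose text embedding has the highest cosine similarity with the locked image tower's output.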