Although massive pre-trained vision-language models like CLIP show impressive generalization capabilities across many tasks, it often remains necessary to fine-tune them to improve performance on specific datasets. When doing so, it is desirable that updating the model is fast and that the model does not lose its capabilities on data outside of the dataset, as is often the case with classical fine-tuning approaches. In this work we propose a lightweight adapter that only updates the model's predictions close to seen datapoints. We demonstrate the effectiveness and speed of this relatively simple approach in the context of few-shot learning, where our results on classes both seen and unseen during training are comparable with or improve on the state of the art.
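To make the idea of an adapter that only modifies predictions near seen datapoints concrete, below is a minimal sketch of one way such a locality-based correction could work. It is not the paper's implementation: the cache-of-features design, the exponential affinity, and the names `cache_keys`, `alpha`, and `beta` are illustrative assumptions, chosen only to show how a frozen model's zero-shot logits can stay untouched far from the training data.

```python
import numpy as np

def adapter_logits(test_feat, cache_keys, cache_values, zeroshot_logits,
                   alpha=1.0, beta=5.0):
    """Blend frozen zero-shot logits with a few-shot cache (illustrative sketch).

    test_feat:       (d,) L2-normalized feature of a test image.
    cache_keys:      (N, d) L2-normalized features of the N few-shot images.
    cache_values:    (N, C) one-hot labels of the few-shot images.
    zeroshot_logits: (C,) logits from the frozen pre-trained model.
    alpha, beta:     blending weight and locality sharpness (assumed defaults).
    """
    # Cosine similarity of the test feature to each stored datapoint.
    sim = cache_keys @ test_feat                    # shape (N,)
    # Sharp exponential: affinity decays toward 0 away from seen datapoints,
    # so the correction is local by construction.
    affinity = np.exp(-beta * (1.0 - sim))          # shape (N,)
    # Similarity-weighted vote over the cached few-shot labels.
    cache_logits = affinity @ cache_values          # shape (C,)
    # Far from all cached datapoints, affinity ~ 0 and the prediction
    # falls back to the unchanged zero-shot model, preserving its
    # capabilities on out-of-dataset inputs.
    return zeroshot_logits + alpha * cache_logits
```

Under these assumptions, training reduces to storing (and optionally lightly tuning) the cached features and labels, which is why updates are fast and the base model's behavior on unseen classes is largely preserved.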