Vision-language foundation models pretrained on large-scale data provide a powerful tool for many visual understanding tasks. Notably, many vision-language models build two encoders (visual and textual) that map the two modalities into a shared embedding space. As a result, the learned representations achieve good zero-shot performance on tasks like image classification. However, when only a few examples per category are available, the potential of large vision-language models is often under-exploited, mainly because of the gap between the large number of model parameters and the relatively small amount of training data. This paper shows that few-shot classification performance can be significantly improved by using the category names to initialize the classification head. With the proposed category name initialization method, our model achieves state-of-the-art performance on a number of few-shot image classification benchmarks (e.g., 87.37% on ImageNet and 96.08% on Stanford Cars, both under five-shot learning).
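The abstract's core idea, initializing the classifier head from text embeddings of the category names, can be illustrated with a minimal sketch. This assumes a CLIP-style dual-encoder model via the open-source `clip` package and a simple "a photo of a {class}" prompt; the paper's actual backbone, prompts, and fine-tuning recipe may differ.

```python
# Hedged sketch: category-name initialization of a few-shot classification head,
# assuming a CLIP-like dual encoder (not necessarily the paper's exact setup).
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["golden retriever", "tabby cat", "sports car"]  # hypothetical label set

# Embed each category name with the text encoder; the normalized embeddings
# become the initial weights of the classification head (instead of random init).
with torch.no_grad():
    prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    text_feats = model.encode_text(prompts)
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)

head = torch.nn.Linear(text_feats.shape[1], len(class_names), bias=False).to(device)
head.weight.data.copy_(text_feats.float())

# Few-shot fine-tuning would then update `head` (and optionally the image encoder)
# on the handful of labeled examples per class; inference scores images as below.
def logits_for(images: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        img_feats = model.encode_image(images.to(device))
        img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)
    return head(img_feats.float())
```

Because the head starts from the same embedding space the encoders were pretrained in, the model begins at roughly zero-shot accuracy and the few labeled examples only need to refine, not relearn, the decision boundaries.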