Self-supervision has shown outstanding results in natural language processing and, more recently, in image recognition. Simultaneously, vision transformers and their variants have emerged as a promising and scalable alternative to convolutions for various computer vision tasks. In this paper, we are the first to question whether self-supervised vision transformers (SSL-ViTs) can be adapted to two important computer vision tasks in the low-label, high-data regime: few-shot image classification and zero-shot image retrieval. The motivation is to reduce the number of manual annotations required to train a visual embedder, and to produce generalizable and semantically meaningful embeddings. For few-shot image classification, we train SSL-ViTs without any supervision on external data, and use this trained embedder to adapt quickly to novel classes with a limited number of labels. For zero-shot image retrieval, we use SSL-ViTs pre-trained on a large dataset without any labels and fine-tune them with several metric learning objectives. Our self-supervised attention representations outperform the state-of-the-art on several public benchmarks for both tasks, namely miniImageNet and CUB200 for few-shot image classification by up to 6%-10%, and Stanford Online Products, Cars196 and CUB200 for zero-shot image retrieval by up to 4%-11%. Code is available at \url{https://github.com/AutoVision-cloud/SSL-ViT-lowlabel-highdata}.