State-of-the-art computer vision models are mostly trained with supervised learning using human-labeled images, which limits their scalability due to the expensive annotation cost. While self-supervised representation learning has achieved impressive progress, it still requires a second stage of finetuning on labeled data. On the other hand, models pre-trained with large-scale text-image supervision (e.g., CLIP) have enabled zero-shot transfer to downstream image classification tasks. However, the zero-shot performance of CLIP-like models is often insufficient for real-world adoption. In this paper, we aim to leverage the abundant unlabeled data from a target domain to improve the performance of a pre-trained zero-shot classifier, by unsupervised finetuning of the pre-trained model. We propose Masked Unsupervised Self-Training (MUST), a new unsupervised adaptation method which leverages two different and complementary sources of training signals: pseudo-labels and raw images. MUST jointly optimizes three objectives to learn both class-level global features and pixel-level local features, and enforces a regularization between the two. We demonstrate the efficacy of MUST on a variety of downstream tasks, where it improves upon CLIP by a large margin. MUST also outperforms supervised few-shot adaptation methods. It achieves a top-1 accuracy of 77.7% on ImageNet using ViT-B, +9.4% higher than CLIP, and +6.2% higher than 16-shot CLIP adaptation. Our code is available at https://github.com/salesforce/MUST.
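To make the three-objective structure concrete, below is a minimal sketch of one MUST-style training step: an EMA teacher pseudo-labels weakly augmented images (self-training on the class-level global feature), the student reconstructs masked patch features (pixel-level local features), and a regularizer ties local features to the global one. All names here (`student`, `teacher`, `mask_patches`, the loss weights, and the confidence threshold) are illustrative assumptions, not the released API; the actual implementation is in the repository linked above.

```python
import torch
import torch.nn.functional as F

CONF_THRESHOLD = 0.7  # assumed confidence cutoff for keeping pseudo-labels

def must_step(student, teacher, images_weak, images_strong, optimizer):
    """One hypothetical training step combining MUST's three objectives."""
    # (1) Self-training: the EMA teacher pseudo-labels weakly augmented views;
    #     the student is trained on strongly augmented views of the same images.
    with torch.no_grad():
        probs = F.softmax(teacher.classify(images_weak), dim=-1)
        conf, pseudo = probs.max(dim=-1)
        keep = conf > CONF_THRESHOLD  # train only on confident predictions

    logits = student.classify(images_strong)
    loss_st = (F.cross_entropy(logits[keep], pseudo[keep])
               if keep.any() else logits.sum() * 0.0)

    # (2) Masked image modeling: predict the teacher's patch features at
    #     masked positions (pixel-level local features).
    masked, mask = mask_patches(images_strong)       # hypothetical helper
    pred_local = student.patch_features(masked)
    with torch.no_grad():
        target_local = teacher.patch_features(images_strong)
    loss_mim = F.mse_loss(pred_local[mask], target_local[mask])

    # (3) Global-local regularization: pull pooled local features toward the
    #     class-level global feature.
    global_feat = student.global_feature(images_strong)
    loss_align = F.mse_loss(pred_local.mean(dim=1), global_feat.detach())

    loss = loss_st + loss_mim + loss_align  # equal weights assumed here
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # EMA update of the teacher from the student (momentum 0.999 assumed).
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(0.999).add_(p_s, alpha=0.001)
    return loss.item()
```

The key design point this sketch illustrates is that pseudo-labels and raw pixels supply complementary signals: the classification loss alone can drift toward the teacher's mistakes, while the masked-reconstruction and alignment terms anchor the representation to the image content itself.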