Contrastive Language-Image Pre-training (CLIP) has demonstrated great potential for open-vocabulary image classification in a matching style, owing to its holistic use of natural language supervision covering unconstrained real-world visual concepts. However, it is in turn difficult to evaluate and analyze the openness of CLIP-like models, since they are in theory open to any vocabulary while their actual accuracy varies. To address the insufficiency of conventional studies on openness, we adopt an incremental view and define extensibility, which approximates the model's ability to handle new visual concepts by evaluating openness through vocabulary expansion. Our evaluation based on extensibility shows that CLIP-like models are hardly truly open, and their performance degrades to varying degrees as the vocabulary expands. Further analysis reveals that the over-estimation of openness arises not because CLIP-like models fail to capture the general similarity between image and text features of novel visual concepts, but because of confusion among competing text features; that is, the models are not stable with respect to the vocabulary. In light of this, we propose to improve the openness of CLIP from the perspective of the feature space by enforcing the distinguishability of text features. Our method retrieves relevant texts from the pre-training corpus to enhance prompts for inference, which boosts the extensibility and stability of CLIP even without fine-tuning.
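The vocabulary-expansion evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's actual protocol: synthetic unit-normalized vectors stand in for CLIP's image and text embeddings, the class counts and noise level are arbitrary, and accuracy is measured as the candidate vocabulary grows while the target images stay fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # feature dimension (illustrative; real CLIP uses e.g. 512)

def normalize(x):
    # CLIP matches images and texts by cosine similarity of unit vectors.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# One text feature per target class; each image feature is a noisy copy
# of its class's text feature (a crude stand-in for CLIP embeddings).
num_classes, imgs_per_class = 10, 50
text = normalize(rng.normal(size=(num_classes, DIM)))
labels = np.repeat(np.arange(num_classes), imgs_per_class)
imgs = normalize(text[labels] + 2.0 * rng.normal(size=(len(labels), DIM)))

# Expand the vocabulary with distractor texts the images never depict.
distractors = normalize(rng.normal(size=(90, DIM)))
all_text = np.vstack([text, distractors])

accs = []
for vocab in (10, 50, 100):
    # Zero-shot matching: pick the most similar text among `vocab` candidates.
    preds = (imgs @ all_text[:vocab].T).argmax(axis=1)
    accs.append(float((preds == labels).mean()))
    print(f"vocabulary size {vocab}: accuracy {accs[-1]:.3f}")
```

Because the target texts and images are fixed while only competing texts are added, accuracy can only decrease (or stay flat) as the vocabulary expands, mirroring the instability among competing text features that the abstract describes.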