Contrastive Language-Image Pre-training (CLIP) has demonstrated great potential for open-vocabulary visual recognition in a matching style, owing to its holistic use of natural language supervision covering unconstrained real-world visual concepts. However, this very property makes the openness of CLIP-like models difficult to evaluate and analyze: they are in theory open to any vocabulary, but their actual accuracy varies. To address the insufficiency of conventional studies on openness, we resort to an incremental perspective and define extensibility, which approximates a model's ability to handle new visual concepts, by evaluating openness under vocabulary expansion. Our extensibility-based evaluation shows that CLIP-like models are hardly truly open: their performance degrades, to varying degrees, as the vocabulary expands. Further analysis reveals that this over-estimation of openness does not arise because CLIP-like models fail to capture the general similarity between image and text features of novel visual concepts, but because of confusion among competing text features; that is, the models are not stable with respect to the vocabulary. In light of this, we propose to improve the openness of CLIP in the feature space by enforcing the distinguishability of text features. Our method retrieves relevant texts from the pre-training corpus to enhance prompts at inference time, which boosts the extensibility and stability of CLIP even without fine-tuning.
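To make the evaluation protocol concrete, below is a minimal sketch of measuring extensibility with zero-shot CLIP classification, assuming the OpenAI `clip` package. The data placeholders, prompt template, and expansion schedule are illustrative assumptions, not the paper's exact setup.

```python
# A minimal sketch of the extensibility evaluation described above, assuming the
# OpenAI "clip" package (pip install git+https://github.com/openai/CLIP).
# `images` is a batch of preprocessed image tensors and `labels` indexes into
# `target_classes`; both are placeholders for a real dataset.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def zero_shot_accuracy(images, labels, vocabulary):
    """Match image features against one text feature per name in `vocabulary`."""
    prompts = clip.tokenize([f"a photo of a {name}" for name in vocabulary]).to(device)
    with torch.no_grad():
        text_feat = model.encode_text(prompts)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
        image_feat = model.encode_image(images.to(device))
        image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
        pred = (image_feat @ text_feat.T).argmax(dim=-1)
    return (pred.cpu() == labels).float().mean().item()

def extensibility_curve(images, labels, target_classes, extra_classes, steps=5):
    """Re-evaluate the same images while the vocabulary grows.

    The extra names simulate newly added visual concepts competing in the same
    matching space; a truly open model would keep this accuracy curve flat.
    """
    step = max(1, len(extra_classes) // steps)
    accs = []
    for k in range(steps + 1):
        vocab = list(target_classes) + list(extra_classes[: k * step])
        accs.append(zero_shot_accuracy(images, labels, vocab))
    return accs
```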
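The mitigation can be sketched in the same spirit: build each class's text feature from retrieved corpus texts rather than a single template. The `corpus` variable and the substring-based retrieval below are hypothetical stand-ins for the actual retrieval scheme, reusing `model`, `clip`, and `device` from the previous snippet.

```python
# A hedged sketch of retrieval-enhanced prompts. `corpus` is a hypothetical
# list of pre-training-style captions; substring matching stands in for
# whatever retrieval the actual method uses.
def retrieval_enhanced_weights(class_names, corpus, max_texts=16):
    weights = []
    for name in class_names:
        hits = [t for t in corpus if name in t][:max_texts]
        texts = hits or [f"a photo of a {name}"]  # fall back to a plain prompt
        tokens = clip.tokenize(texts, truncate=True).to(device)
        with torch.no_grad():
            feats = model.encode_text(tokens)
            feats = feats / feats.norm(dim=-1, keepdim=True)
        w = feats.mean(dim=0)  # average retrieved texts into one class feature
        weights.append(w / w.norm())
    return torch.stack(weights)  # (num_classes, dim); replaces text_feat above
```

Note that no parameters are updated in this sketch; only the text-side classifier weights change, which is consistent with the claim that the improvement comes without fine-tuning.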