The extent to which text-only language models (LMs) learn to represent the physical, non-linguistic world is an open question. Prior work has shown that pretrained LMs can be taught to ``understand'' visual inputs when the models' parameters are updated on image captioning tasks. We test a stronger hypothesis: that the conceptual representations learned by text-only models are functionally equivalent (up to a linear transformation) to those learned by models trained on vision tasks. Specifically, we show that the image representations from vision models can be transferred as continuous prompts to frozen LMs by training only a single linear projection. Using these to prompt the LM achieves competitive performance on captioning and visual question answering tasks compared to models that tune both the image encoder and text decoder (such as the MAGMA model). We compare three image encoders with increasing amounts of linguistic supervision seen during pretraining: BEIT (no linguistic information), NF-ResNET (lexical category information), and CLIP (full natural language descriptions). We find that all three encoders perform equally well at transferring visual property information to the language model (e.g., whether an animal is large or small), but that image encoders pretrained with linguistic supervision more saliently encode category information (e.g., distinguishing hippo vs.\ elephant) and thus perform significantly better on benchmark language-and-vision tasks. Our results indicate that LMs encode conceptual information structurally similarly to vision-based models, even those that are solely trained on images.
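The core mechanism described above — mapping frozen image-encoder outputs into the language model's embedding space with a single trained linear projection, then prepending them as continuous prompts — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dimensions, prompt length, and initialization are assumed for the example, and real encoder/LM calls are replaced with random vectors.

```python
import numpy as np

# Illustrative dimensions (assumptions, not the paper's actual sizes):
d_img, d_lm, k = 512, 768, 4  # encoder dim, LM embedding dim, prompt length

rng = np.random.default_rng(0)

# The single linear projection is the ONLY trained component;
# both the image encoder and the language model stay frozen.
W = rng.standard_normal((d_img, k * d_lm)) * 0.01

def image_to_prompt(image_feat: np.ndarray) -> np.ndarray:
    """Map one image feature vector to k continuous prompt embeddings."""
    return (image_feat @ W).reshape(k, d_lm)

image_feat = rng.standard_normal(d_img)        # stand-in for frozen encoder output
prompt = image_to_prompt(image_feat)           # shape (k, d_lm)

# The soft prompt is prepended to the embedded caption tokens and fed
# to the frozen LM in place of ordinary token embeddings.
text_embeds = rng.standard_normal((10, d_lm))  # stand-in for embedded text
lm_input = np.concatenate([prompt, text_embeds], axis=0)
print(lm_input.shape)  # (14, 768)
```

During training, only `W` receives gradients from the captioning loss, which is what makes the result informative: any success at the downstream task must come from a linear correspondence between the two representation spaces.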