The extent to which text-only language models (LMs) learn to represent features of the non-linguistic world is an open question. Prior work has shown that pretrained LMs can be taught to caption images when a vision model's parameters are optimized to encode images in the language space. We test a stronger hypothesis: that the conceptual representations learned by frozen text-only models and vision-only models are similar enough that this can be achieved with a linear map. We show that the image representations from vision models can be transferred as continuous prompts to frozen LMs by training only a single linear projection. Using these to prompt the LM achieves competitive performance on captioning and visual question answering tasks compared to models that tune both the image encoder and text decoder (such as the MAGMA model). We compare three image encoders with increasing amounts of linguistic supervision seen during pretraining: BEIT (no linguistic information), NF-ResNET (lexical category information), and CLIP (full natural language descriptions). We find that all three encoders perform equally well at transferring visual property information to the language model (e.g., whether an animal is large or small), but that image encoders pretrained with linguistic supervision more saliently encode category information (e.g., distinguishing hippo vs. elephant) and thus perform significantly better on benchmark language-and-vision tasks. Our results indicate that LMs encode conceptual information structurally similarly to vision-based models, even those that are solely trained on images. Code is available here: https://github.com/jmerullo/limber
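The core mechanism described above (a single trainable linear map that projects frozen image-encoder features into the frozen LM's embedding space, used as a continuous prompt) can be sketched compactly. The following is a minimal illustrative sketch assuming PyTorch; the dimensions, prefix length, and module names (LinearImagePrefix, proj) are hypothetical and not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class LinearImagePrefix(nn.Module):
    """Sketch of the setup: a frozen vision model's output is linearly
    projected into the frozen LM's embedding space and prepended as a
    continuous prompt. Only this projection receives gradients."""

    def __init__(self, image_dim: int, lm_embed_dim: int, prefix_len: int):
        super().__init__()
        # The single trainable component: one linear map from image
        # feature space to (prefix_len * lm_embed_dim).
        self.proj = nn.Linear(image_dim, lm_embed_dim * prefix_len)
        self.prefix_len = prefix_len
        self.lm_embed_dim = lm_embed_dim

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, image_dim) from a frozen image encoder
        # returns: (batch, prefix_len, lm_embed_dim) soft prompts
        batch = image_features.size(0)
        prefix = self.proj(image_features)
        return prefix.view(batch, self.prefix_len, self.lm_embed_dim)

# Usage sketch with hypothetical sizes: CLIP-like features (512-d)
# projected into a GPT-style embedding space (4096-d) as a 4-token prompt.
projector = LinearImagePrefix(image_dim=512, lm_embed_dim=4096, prefix_len=4)
image_features = torch.randn(2, 512)          # stand-in for encoder output
soft_prompts = projector(image_features)       # (2, 4, 4096)
# These soft prompts would be concatenated with caption token embeddings
# and fed to the frozen LM; the captioning loss trains only `proj`.
```

Both the vision encoder and the LM stay frozen throughout; because the bridge is purely linear, any transfer it achieves is evidence that the two representation spaces are already structurally aligned.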