Many high-level skills that are required for computer vision tasks, such as parsing questions, comparing and contrasting semantics, and writing descriptions, are also required in other domains such as natural language processing. In this paper, we ask whether this makes it possible to learn those skills from text data and then use them to complete vision tasks without ever training on visual data. Key to our approach is exploiting the joint embedding space of contrastively trained vision and language encoders. In practice, there can be systematic differences between the embedding spaces of different modalities in contrastive models, and we analyze how these differences affect our approach and study a variety of strategies to mitigate this concern. We produce models using only text training data on three tasks: image captioning, visual entailment, and visual question answering, and evaluate them on standard benchmarks using images. We find that this kind of transfer is possible and results in only a small drop in performance relative to models trained on images. We also showcase a variety of stylistic image captioning models that were trained using no image data and no human-curated language data, but instead text data from books, the web, or language models.
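To make the transfer recipe concrete, the sketch below illustrates the general idea under stated assumptions: a downstream task head is trained on text embeddings only and then fed image embeddings at evaluation time, relying on the joint embedding space of a contrastively trained encoder pair. The choice of the HuggingFace `transformers` CLIP checkpoint, the helper names, and the Gaussian-noise injection used to bridge the text/image embedding shift (with an illustrative noise scale) are assumptions for this sketch, not the paper's released implementation.

```python
# Minimal sketch: train on CLIP text embeddings, evaluate on CLIP image embeddings.
# Checkpoint, helper names, and noise scale are illustrative assumptions.
import torch
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def embed_text(captions):
    inputs = processor(text=captions, return_tensors="pt",
                       padding=True, truncation=True).to(device)
    feats = clip.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)  # unit-normalize, as in CLIP

@torch.no_grad()
def embed_image(pil_images):
    inputs = processor(images=pil_images, return_tensors="pt").to(device)
    feats = clip.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def training_inputs(captions, noise_std=0.1):
    # Training sees only text. Adding noise to the text embeddings is one possible
    # strategy to make the task head robust to the systematic text-vs-image
    # embedding differences mentioned in the abstract.
    z = embed_text(captions)
    z = z + noise_std * torch.randn_like(z)
    return z / z.norm(dim=-1, keepdim=True)

def inference_inputs(pil_images):
    # At test time the same task head consumes image embeddings instead.
    return embed_image(pil_images)
```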