Human language is grounded in multimodal knowledge, including visual knowledge such as colors, sizes, and shapes. However, current large-scale pre-trained language models rely on text-only self-supervised training over massive text corpora, which precludes them from utilizing relevant visual information when necessary. To address this, we propose a novel pre-training framework, named VaLM, to Visually-augment text tokens with retrieved relevant images for Language Modeling. Specifically, VaLM builds on a novel latent text-image alignment method: an image retrieval module fetches images corresponding to a given textual context. With the visually-augmented context, VaLM uses a visual knowledge fusion layer to enable multimodal grounded language modeling by attending to both the text context and the visual knowledge in the retrieved images. We evaluate VaLM on various visual-knowledge-intensive commonsense reasoning tasks, which require visual information to perform well. The experimental results show that VaLM outperforms all strong language-only and vision-language baselines, with substantial gains in reasoning about object commonsense, including color, size, and shape. Our code is available at https://github.com/Victorwz/VaLM.
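The abstract names two components: a text-to-image retrieval module and a visual knowledge fusion layer that attends over both the textual context and the retrieved images. As a rough illustration of the second component only, the PyTorch sketch below shows one way such a fusion layer could combine the two streams; all class, method, and parameter names here are assumptions for illustration and may differ from the released implementation in the repository above.

```python
# Illustrative sketch only: names and design details are assumptions,
# not the authors' implementation (see the linked repository for that).
import torch
import torch.nn as nn


class VisualKnowledgeFusionLayer(nn.Module):
    """Fuses text hidden states with retrieved image features by attending
    jointly over the textual context and the set of retrieved images."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        # Causal masking for language modeling is omitted here for brevity.
        self.text_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.image_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, 1)  # learned mix of the two streams
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_states: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # text_states: (batch, seq_len, d_model) hidden states of the text context
        # image_feats: (batch, n_images, d_model) embeddings of retrieved images
        t, _ = self.text_attn(text_states, text_states, text_states)   # text self-attention
        v, _ = self.image_attn(text_states, image_feats, image_feats)  # cross-attention to images
        g = torch.sigmoid(self.gate(torch.cat([t, v], dim=-1)))        # per-token fusion gate
        return self.norm(text_states + g * v + (1 - g) * t)


# Usage sketch: image_feats would come from the retrieval module, e.g. CLIP
# embeddings of the top-k images fetched for the current textual context.
layer = VisualKnowledgeFusionLayer(d_model=768, n_heads=12)
text_states = torch.randn(2, 16, 768)   # batch of text hidden states
image_feats = torch.randn(2, 4, 768)    # 4 retrieved image embeddings per context
fused = layer(text_states, image_feats)  # (2, 16, 768)
```

The gating here is one plausible way to let the model fall back to text-only context when the retrieved images are uninformative; the actual fusion mechanism used by VaLM is described in the paper and repository.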