Human language is grounded in multimodal knowledge, including visual knowledge such as colors, sizes, and shapes. However, current large-scale pre-trained language models rely on text-only self-supervised training with massive text data, which precludes them from utilizing relevant visual information when necessary. To address this, we propose a novel pre-training framework, named VaLM, to Visually-augment text tokens with retrieved relevant images for Language Modeling. Specifically, VaLM builds on a novel text-vision alignment method via an image retrieval module that fetches corresponding images given a textual context. With the visually-augmented context, VaLM uses a visual knowledge fusion layer to enable multimodal grounded language modeling by attending to both the text context and the visual knowledge in the retrieved images. We evaluate the proposed model on various multimodal commonsense reasoning tasks, which require visual information to excel. VaLM outperforms the text-only baseline with substantial gains of +8.66% and +37.81% accuracy on object color and size reasoning, respectively.
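To make the fusion step concrete, below is a minimal sketch of a visual knowledge fusion layer. The module name, dimensions, and the use of `nn.MultiheadAttention` are illustrative assumptions rather than the paper's exact implementation; the sketch only shows the core idea of text-token queries attending jointly over the text hidden states and the embeddings of retrieved images.

```python
import torch
import torch.nn as nn


class VisualFusionSketch(nn.Module):
    """Hypothetical sketch: a transformer block whose attention lets each text
    token attend over both the text context and retrieved image embeddings."""

    def __init__(self, d_model: int = 768, n_heads: int = 12):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, text_h: torch.Tensor, image_h: torch.Tensor) -> torch.Tensor:
        # text_h:  (batch, seq_len, d_model) hidden states of the text context
        # image_h: (batch, k, d_model) embeddings of the k retrieved images,
        #          assumed to be projected into the language model's hidden space
        # NOTE: the causal mask over text positions is omitted for brevity.
        keys = torch.cat([image_h, text_h], dim=1)   # joint key/value set: images + text
        attn_out, _ = self.attn(text_h, keys, keys)  # queries come from text tokens only
        h = self.norm1(text_h + attn_out)
        return self.norm2(h + self.ffn(h))
```

In this reading, retrieval supplies `image_h` per context, and the fusion layer lets the language model condition next-token prediction on visual evidence without changing the rest of the text-only backbone.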