In this paper, we propose an end-to-end Retrieval-Augmented Visual Language Model (REVEAL) that learns to encode world knowledge into a large-scale memory, and to retrieve from it to answer knowledge-intensive queries. REVEAL consists of four key components: the memory, the encoder, the retriever, and the generator. The large-scale memory encodes various sources of multimodal world knowledge (e.g., image-text pairs, question-answering pairs, knowledge-graph triplets) via a unified encoder. The retriever finds the most relevant knowledge entries in the memory, and the generator fuses the retrieved knowledge with the input query to produce the output. A key novelty of our approach is that the memory, encoder, retriever, and generator are all pre-trained end-to-end on a massive amount of data. Furthermore, our approach can draw on a diverse set of multimodal knowledge sources, which is shown to yield significant gains. We show that REVEAL achieves state-of-the-art results on visual question answering and image captioning.
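The retrieve-then-generate flow described above can be sketched in miniature. Everything below is a toy illustration under assumed design choices (unit-normalized embeddings, cosine top-k retrieval, fusion by concatenation); the function names and the pseudo-encoder are hypothetical and do not reflect REVEAL's actual architecture.

```python
import numpy as np

def encode(text, dim=8):
    # Stand-in for the unified encoder: a deterministic pseudo-embedding
    # seeded by the text, normalized to unit length (an assumption).
    rng = np.random.default_rng(sum(ord(c) for c in text))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def retrieve(query_vec, memory_vecs, k=2):
    # Top-k entries by inner product (equals cosine similarity here,
    # since all vectors are unit-norm).
    scores = memory_vecs @ query_vec
    top = np.argsort(-scores)[:k]
    return top, scores[top]

# Memory built from heterogeneous knowledge entries (image-text pairs,
# QA pairs, knowledge-graph triplets), all mapped by the same encoder.
entries = ["image-text pair", "qa pair", "kg triplet"]
memory = np.stack([encode(e) for e in entries])

q = encode("knowledge-intensive query")
idx, sc = retrieve(q, memory, k=2)
retrieved = [entries[i] for i in idx]

# A real generator would attend over the retrieved entries; here we
# merely fuse by concatenating them with the query as a placeholder.
fused_input = "query | " + " | ".join(retrieved)
```

The point of the sketch is the interface: a single encoder maps both queries and heterogeneous knowledge entries into one embedding space, so retrieval reduces to a nearest-neighbor lookup before generation.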