In this paper, we propose an end-to-end Retrieval-Augmented Visual Language Model (REVEAL) that learns to encode world knowledge into a large-scale memory, and to retrieve from it to answer knowledge-intensive queries. REVEAL consists of four key components: the memory, the encoder, the retriever, and the generator. The large-scale memory encodes diverse sources of multimodal world knowledge (e.g., image-text pairs, question-answer pairs, knowledge-graph triplets, etc.) via a unified encoder. The retriever finds the most relevant knowledge entries in the memory, and the generator fuses the retrieved knowledge with the input query to produce the output. A key novelty of our approach is that the memory, encoder, retriever, and generator are all pre-trained end-to-end on a massive amount of data. Furthermore, our approach can draw on a diverse set of multimodal knowledge sources, which we show yields significant gains. We show that REVEAL achieves state-of-the-art results on visual question answering and image captioning.
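To make the retrieve-then-generate flow concrete, the sketch below walks through the three stages the abstract describes: encoding heterogeneous knowledge entries into a memory of key embeddings, retrieving the top-k entries by similarity to the query, and fusing them with the query for generation. This is a minimal illustration under stated assumptions, not REVEAL's implementation: `toy_encoder`, `toy_generator`, `build_memory`, `retrieve_top_k`, and `answer` are all hypothetical stand-ins for the paper's unified encoder, attentive-fusion generator, and retrieval machinery.

```python
# Minimal sketch of a retrieve-then-generate pipeline in the spirit of
# REVEAL. All function names here are illustrative stand-ins, not the
# paper's actual API or architecture.
import zlib
import numpy as np

def toy_encoder(entry, d=64):
    # Deterministic pseudo-embedding keyed on the entry's text; a toy
    # stand-in for a unified multimodal encoder.
    rng = np.random.default_rng(zlib.crc32(str(entry).encode()))
    return rng.standard_normal(d)

def build_memory(entries, encoder):
    # Encode heterogeneous knowledge entries (image-text pairs, QA pairs,
    # knowledge-graph triplets, ...) into one matrix of key embeddings.
    keys = np.stack([encoder(e) for e in entries])  # shape (N, d)
    return keys, list(entries)

def retrieve_top_k(query_emb, keys, entries, k=5):
    # Cosine similarity between the query and every memory key,
    # then keep the k highest-scoring entries.
    scores = keys @ query_emb / (
        np.linalg.norm(keys, axis=1) * np.linalg.norm(query_emb) + 1e-9)
    top = np.argsort(-scores)[:k]
    return [entries[i] for i in top]

def toy_generator(query, retrieved):
    # Stand-in for the fusion generator: here we merely surface the
    # retrieved entries alongside the query instead of decoding text.
    return f"query={query!r} fused with {len(retrieved)} entries: {retrieved}"

def answer(query, encoder, generator, keys, entries, k=5):
    # Full flow: encode the query, retrieve relevant knowledge, fuse
    # and generate the output.
    retrieved = retrieve_top_k(encoder(query), keys, entries, k)
    return generator(query, retrieved)

if __name__ == "__main__":
    kb = [
        "(Eiffel Tower, located_in, Paris)",                      # KG triplet
        "Q: Who painted the Mona Lisa? A: Leonardo da Vinci",     # QA pair
        "caption: the Golden Gate Bridge at sunset",              # image-text
    ]
    keys, entries = build_memory(kb, toy_encoder)
    print(answer("Where is the Eiffel Tower?", toy_encoder,
                 toy_generator, keys, entries, k=2))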