The problem of knowledge-based visual question answering involves answering questions that require external knowledge in addition to the content of the image. Such knowledge typically comes in a variety of forms, including visual, textual, and commonsense knowledge. The use of more knowledge sources, however, also increases the chance of retrieving more irrelevant or noisy facts, making it difficult to comprehend the facts and find the answer. To address this challenge, we propose Multi-modal Answer Validation using External knowledge (MAVEx), where the idea is to validate a set of promising answer candidates based on answer-specific knowledge retrieval. This is in contrast to existing approaches that search for the answer in a vast collection of often irrelevant facts. Our approach aims to learn which knowledge source should be trusted for each answer candidate and how to validate the candidate using that source. We consider a multi-modal setting, relying on both textual and visual knowledge resources, including images searched using Google, sentences from Wikipedia articles, and concepts from ConceptNet. Our experiments with OK-VQA, a challenging knowledge-based VQA dataset, demonstrate that MAVEx achieves new state-of-the-art results.
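To make the validation idea concrete, the following is a minimal, hypothetical Python sketch of the pipeline the abstract describes: retrieve answer-specific evidence for each candidate from several knowledge sources and keep the best-supported candidate. Every name here (the retriever stubs, score_support, SOURCE_TRUST) is an illustrative assumption, not the paper's actual models; in MAVEx the source-trust weights and the support scorer are learned, not fixed.

```python
from typing import Callable, Dict, List

# Hypothetical sketch only: all retrievers below are stubs standing in
# for real lookups against Google image search, Wikipedia, and ConceptNet.
Retriever = Callable[[str, str, str], List[str]]

def google_images(image: str, question: str, candidate: str) -> List[str]:
    # Stand-in for searching Google Images with the candidate answer.
    return [f"image evidence for '{candidate}'"]

def wikipedia_sentences(image: str, question: str, candidate: str) -> List[str]:
    # Stand-in for retrieving sentences from Wikipedia articles.
    return [f"Wikipedia sentence mentioning '{candidate}'"]

def conceptnet_concepts(image: str, question: str, candidate: str) -> List[str]:
    # Stand-in for retrieving related concepts from ConceptNet.
    return [f"ConceptNet relation involving '{candidate}'"]

SOURCES: Dict[str, Retriever] = {
    "images": google_images,
    "wikipedia": wikipedia_sentences,
    "conceptnet": conceptnet_concepts,
}

# In MAVEx the per-source trust is learned for each candidate; fixed
# numbers are used here only to keep the sketch runnable.
SOURCE_TRUST: Dict[str, float] = {
    "images": 0.3, "wikipedia": 0.5, "conceptnet": 0.2,
}

def score_support(question: str, candidate: str, facts: List[str]) -> float:
    # Placeholder for a learned model scoring how well the retrieved
    # facts support the candidate; a real system would use a
    # multi-modal encoder here.
    return float(len(facts))

def validate(image: str, question: str, candidates: List[str]) -> str:
    """Pick the candidate whose answer-specific evidence, weighted by
    how much each knowledge source is trusted, scores highest."""
    def total_score(candidate: str) -> float:
        return sum(
            SOURCE_TRUST[name]
            * score_support(question, candidate,
                            retrieve(image, question, candidate))
            for name, retrieve in SOURCES.items()
        )
    return max(candidates, key=total_score)

if __name__ == "__main__":
    print(validate("img.jpg", "What sport is shown?", ["tennis", "squash"]))
```

The key design choice the sketch mirrors is that retrieval is conditioned on each answer candidate, so the system only has to judge a small, targeted set of facts rather than search a vast collection of mostly irrelevant ones.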