Multimodal Large Language Models (MLLMs) have shown impressive capabilities in jointly understanding text, images, and videos, and are often evaluated via Visual Question Answering (VQA). However, even state-of-the-art MLLMs struggle with domain-specific or knowledge-intensive queries, where the relevant information is underrepresented in their pre-training data. Knowledge-based VQA (KB-VQA) addresses this by retrieving external documents to condition answer generation, but current retrieval-augmented approaches suffer from low retrieval precision, noisy passages, and limited reasoning over the retrieved content. To address these issues, we propose ReAG, a novel Reasoning-Augmented Multimodal RAG approach that combines coarse- and fine-grained retrieval with a critic model that filters out irrelevant passages, ensuring that only high-quality context is added. The model follows a multi-stage training strategy in which supervised fine-tuning serves only as a cold start and reinforcement learning is then used to strengthen reasoning over the retrieved content. Extensive experiments on Encyclopedic-VQA and InfoSeek demonstrate that ReAG significantly outperforms prior methods, improving answer accuracy and providing interpretable reasoning grounded in retrieved evidence. Our source code is publicly available at: https://github.com/aimagelab/ReAG.
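To make the retrieval-then-filter pipeline described above concrete, the sketch below shows one possible way such a system could be wired together. This is not the ReAG implementation; the helper names (`coarse_retrieve`, `fine_retrieve`, `critic_keep`, `generate_answer`) and the keyword-overlap scoring are illustrative placeholders standing in for the learned retrievers, the critic model, and the reasoning MLLM from the paper.

```python
# Hypothetical sketch of a critic-filtered multimodal RAG pipeline (not the official ReAG code).
# The retrieval scores use a trivial keyword-overlap proxy; in practice these would be
# learned multimodal retrievers, and `critic_keep` / `generate_answer` would wrap models.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Passage:
    doc_id: str
    text: str
    score: float = 0.0


def coarse_retrieve(query: str, corpus: Dict[str, str], k: int) -> List[str]:
    """Rank whole documents with a cheap relevance proxy (keyword overlap here)."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc_id: len(q_terms & set(corpus[doc_id].lower().split())),
        reverse=True,
    )
    return ranked[:k]


def fine_retrieve(query: str, doc_id: str, doc_text: str, k: int) -> List[Passage]:
    """Split a retrieved document into passages and rank them with the same proxy."""
    q_terms = set(query.lower().split())
    passages = [
        Passage(doc_id, p, float(len(q_terms & set(p.lower().split()))))
        for p in doc_text.split("\n")
        if p.strip()
    ]
    return sorted(passages, key=lambda p: p.score, reverse=True)[:k]


def answer(
    query: str,
    corpus: Dict[str, str],
    critic_keep: Callable[[str, Passage], bool],
    generate_answer: Callable[[str, List[Passage]], str],
    k_docs: int = 3,
    k_passages: int = 5,
) -> str:
    """Coarse document retrieval -> fine passage retrieval -> critic filtering -> generation."""
    candidates: List[Passage] = []
    for doc_id in coarse_retrieve(query, corpus, k_docs):
        candidates += fine_retrieve(query, doc_id, corpus[doc_id], k_passages)
    # The critic discards passages it judges irrelevant, so only clean context reaches the generator.
    kept = [p for p in candidates if critic_keep(query, p)]
    return generate_answer(query, kept)
```

In this sketch the critic acts purely as a binary gate over candidate passages; the paper's multi-stage training (supervised cold start followed by reinforcement learning) would apply to the models behind `critic_keep` and `generate_answer`, not to the plumbing shown here.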