Humans understand long and complex texts by relying on a holistic semantic representation of the content. This global view helps organize prior knowledge, interpret new information, and integrate evidence dispersed across a document, a capacity that psychological research characterizes as the Mindscape-Aware Capability of humans. Current Retrieval-Augmented Generation (RAG) systems lack such guidance and therefore struggle with long-context tasks. In this paper, we propose Mindscape-Aware RAG (MiA-RAG), the first approach that equips LLM-based RAG systems with explicit global context awareness. MiA-RAG builds a mindscape through hierarchical summarization and conditions both retrieval and generation on this global semantic representation. This enables the retriever to form enriched query embeddings and the generator to reason over retrieved evidence within a coherent global context. We evaluate MiA-RAG across diverse long-context and bilingual benchmarks for evidence-based understanding and global sense-making. It consistently surpasses baselines, and further analysis shows that it aligns local details with a coherent global representation, enabling more human-like long-context retrieval and reasoning.