Modern information retrieval (IR) must reconcile short, ambiguous queries with increasingly diverse and dynamic corpora. Query expansion (QE) remains central to alleviating vocabulary mismatch, yet the design space has shifted with pre-trained and large language models (PLMs, LLMs). In this survey, we organize recent work along four complementary dimensions: the point of injection (implicit, embedding-level expansion vs. explicit, selection-based expansion), grounding and interaction (from zero-grounding prompts to multi-round retrieve-expand loops), learning and alignment (SFT/PEFT/DPO), and knowledge-graph integration. We also outline a model-centric taxonomy spanning encoder-only, encoder-decoder, decoder-only, instruction-tuned, and domain-specific or multilingual variants, highlighting their affordances for QE such as contextual disambiguation, controllable generation, and zero-shot or few-shot reasoning. Practice-oriented guidance identifies where neural QE helps most: first-stage retrieval, multi-query fusion, re-ranking, and retrieval-augmented generation (RAG). We compare traditional and neural QE across seven aspects and map applications in web search, biomedicine, e-commerce, open-domain question answering and RAG, conversational and code search, and cross-lingual settings. We conclude with a research agenda focused on reliable, safe, efficient, and adaptive QE, offering a principled blueprint for deploying and combining these techniques under real-world constraints.
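To make two of the patterns named above concrete, the multi-round retrieve-expand loop and multi-query fusion, the following is a minimal, self-contained Python sketch. The toy corpus, the term-overlap retriever, the pseudo-relevance-style expander, and the reciprocal rank fusion step are illustrative assumptions, not an implementation of any particular method covered in the survey; in the surveyed systems an LLM typically generates the expansion text and a BM25 or dense retriever supplies the rankings.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a real document collection (assumption for illustration).
CORPUS = {
    "d1": "neural query expansion with large language models",
    "d2": "bm25 first stage retrieval and vocabulary mismatch",
    "d3": "retrieval augmented generation pipelines for question answering",
    "d4": "pseudo relevance feedback expands queries with top document terms",
}

def retrieve(query, top_k=3):
    """Toy first-stage retriever: rank documents by term overlap with the query."""
    q_terms = set(query.lower().split())
    scored = {d: len(q_terms & set(text.split())) for d, text in CORPUS.items()}
    return [d for d, s in sorted(scored.items(), key=lambda x: -x[1]) if s > 0][:top_k]

def expand(query, doc_ids, n_terms=2):
    """Toy expander: append frequent terms from the retrieved documents
    (pseudo-relevance feedback style; an LLM would normally propose these terms)."""
    counts = Counter(t for d in doc_ids for t in CORPUS[d].split())
    new_terms = [t for t, _ in counts.most_common() if t not in query.split()][:n_terms]
    return query + " " + " ".join(new_terms)

def rrf(rankings, k=60):
    """Reciprocal rank fusion over several ranked lists (multi-query fusion)."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def retrieve_expand_loop(query, rounds=2):
    """Alternate retrieval and expansion, then fuse the per-round rankings."""
    rankings, q = [], query
    for _ in range(rounds):
        ranking = retrieve(q)
        rankings.append(ranking)
        q = expand(q, ranking)
    return rrf(rankings)

print(retrieve_expand_loop("query expansion"))  # e.g. ['d1', 'd4']
```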