The success of DeepSeek-R1 demonstrates the immense potential of using reinforcement learning (RL) to enhance the reasoning capabilities of LLMs. This paper introduces Retrv-R1, the first R1-style MLLM designed specifically for multimodal universal retrieval, which achieves higher performance by reasoning step by step to produce more accurate retrieval results. We find that directly applying the methods of DeepSeek-R1 to retrieval is not feasible, mainly because of (1) the high computational cost of the large token budget needed to reason over multiple candidates, and (2) the instability and suboptimal results that arise when RL is applied directly to retrieval training. To address these issues, Retrv-R1 introduces an information compression module with a details inspection mechanism, which improves computational efficiency by reducing the number of candidate tokens while preserving the information critical for challenging candidates. Furthermore, a new training paradigm is proposed, consisting of an activation stage on a retrieval-tailored synthetic CoT dataset for more effective optimization, followed by RL with a novel curriculum reward that improves both performance and efficiency. With these designs, Retrv-R1 achieves SOTA performance, high efficiency, and strong generalization, as demonstrated by experiments on multiple benchmarks and tasks.
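To make the curriculum-reward idea concrete, below is a minimal sketch of a reward that combines retrieval correctness with an efficiency term whose weight is annealed over training. The exact reward used by Retrv-R1 is not specified here; the function name, weighting scheme, and annealing schedule are illustrative assumptions only.

```python
# A hedged sketch of a curriculum-style reward for retrieval RL.
# Assumption: reward = correctness + (annealed weight) * efficiency bonus,
# where the efficiency bonus favors shorter reasoning traces. This is not
# the paper's exact formulation.

def curriculum_reward(is_correct: bool,
                      num_reasoning_tokens: int,
                      train_progress: float,
                      max_tokens: int = 1024) -> float:
    """Return a scalar reward for one retrieval rollout.

    is_correct:           whether the model selected the ground-truth candidate
    num_reasoning_tokens: length of the generated reasoning trace
    train_progress:       training progress in [0, 1], drives the curriculum
    max_tokens:           token budget used to normalize the efficiency term
    """
    # Accuracy term: the primary signal throughout training.
    accuracy = 1.0 if is_correct else 0.0

    # Efficiency term: rewards shorter reasoning, normalized to [0, 1].
    efficiency = max(0.0, 1.0 - num_reasoning_tokens / max_tokens)

    # Curriculum: early training emphasizes correctness only; the efficiency
    # weight grows as training progresses.
    efficiency_weight = 0.5 * train_progress

    # Efficiency is only rewarded when the answer is correct.
    return accuracy + efficiency_weight * efficiency * accuracy


# Example: late in training, a correct answer with a short trace scores higher
# than an equally correct but verbose one.
print(curriculum_reward(True, 200, train_progress=0.9))   # ~1.36
print(curriculum_reward(True, 900, train_progress=0.9))   # ~1.05
```

Under these assumptions, the curriculum lets the model first learn to retrieve correctly and only later trades reasoning length against reward, which matches the abstract's stated goal of improving both performance and efficiency.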