Medical question answering requires advanced reasoning that integrates domain knowledge with logical inference. However, existing large language models (LLMs) often generate reasoning chains that lack factual accuracy and clinical reliability. We propose Ranked Preference Reinforcement Optimization (RPRO), a novel framework that combines reinforcement learning with preference-driven reasoning refinement to enhance clinical chain-of-thought (CoT) performance. RPRO distinguishes itself from prior approaches by employing task-adaptive reasoning templates and a probabilistic evaluation mechanism that aligns model outputs with established clinical workflows, while automatically identifying and correcting low-quality reasoning chains. Unlike traditional pairwise preference methods, RPRO introduces groupwise ranking optimization based on the Bradley--Terry model and incorporates KL-divergence regularization for stable training. Experiments on PubMedQA, MedQA-USMLE, and a real-world clinical dataset from Far Eastern Memorial Hospital (FEMH) show consistent improvements over strong baselines. Remarkably, our 2B-parameter model outperforms much larger 7B--20B models, including medical-specialized variants. These findings demonstrate that combining preference optimization with quality-driven refinement provides a scalable and clinically grounded approach to building more reliable medical LLMs.
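For intuition, the following is a minimal sketch of one standard way to extend the pairwise Bradley--Terry model to groupwise rankings with KL-divergence regularization; the Plackett--Luce factorization and the symbols $r_\theta$, $\tau$, $\beta$, and $\pi_{\mathrm{ref}}$ are illustrative assumptions, not necessarily the exact objective used by RPRO.

% Illustrative groupwise (Plackett--Luce) extension of Bradley--Terry with KL regularization.
\[
P_\theta(\tau \mid x) \;=\; \prod_{k=1}^{K} \frac{\exp\!\big(r_\theta(x, y_{\tau(k)})\big)}{\sum_{j=k}^{K} \exp\!\big(r_\theta(x, y_{\tau(j)})\big)},
\qquad
\mathcal{L}(\theta) \;=\; -\,\mathbb{E}\big[\log P_\theta(\tau \mid x)\big]
\;+\; \beta\, \mathrm{KL}\!\big(\pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x)\big).
\]

Here $\tau$ orders the $K$ sampled reasoning chains for prompt $x$ from best to worst, $r_\theta$ scores each chain, and the KL term keeps the policy $\pi_\theta$ close to a reference model $\pi_{\mathrm{ref}}$ to stabilize training.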