Large vision-language models struggle with medical video understanding, where spatial precision, temporal reasoning, and clinical semantics are critical. To address this, we first introduce \textbf{MedVidBench}, a large-scale benchmark of 531,850 video-instruction pairs across 8 medical sources spanning video-, segment-, and frame-level tasks, curated through a rigorous quality assurance pipeline with expert-guided prompting and dual-model validation. While supervised fine-tuning (SFT) on MedVidBench yields noticeable gains, standard reinforcement learning (RL) fails due to imbalanced reward scales across datasets, which destabilizes optimization and leads to training collapse. To overcome this, we introduce \textbf{MedGRPO}, a novel RL framework for balanced multi-dataset training with two key innovations: (1) \emph{cross-dataset reward normalization}, which maps each dataset's median performance to a common reward value, ensuring fair optimization regardless of difficulty, and (2) a \emph{medical LLM judge}, which evaluates caption quality on five clinical dimensions through comparative similarity scoring. A Qwen2.5-VL-7B model fine-tuned on MedVidBench substantially outperforms GPT-4.1 and Gemini-2.5-Flash across all tasks, demonstrating MedVidBench's efficacy, while our MedGRPO framework further improves upon the SFT baseline on grounding and captioning tasks. Our work establishes a foundational benchmark and a robust training methodology for advancing vision-language models in medical domains. Our project website is available at https://yuhaosu.github.io/MedGRPO/.
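A minimal sketch of the cross-dataset reward normalization idea follows; the exact formulation is not given in this abstract, and the per-dataset scale $s_d$ and common target value $r_0$ below are illustrative assumptions. A raw reward $r_{d,i}$ for sample $i$ from dataset $d$ could be remapped so that each dataset's median reward lands on the same value,
\[
\tilde{r}_{d,i} \;=\; r_0 \;+\; s_d \,\bigl(r_{d,i} - \operatorname{median}_j\, r_{d,j}\bigr),
\]
so that easier and harder datasets contribute comparably scaled reward signals during policy optimization.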