Reinforcement learning (RL) has shown promise in enhancing the general Chain-of-Thought (CoT) reasoning capabilities of multimodal large language models (MLLMs). However, existing RL frameworks often struggle to generalize beyond the training distribution. To address this, we propose NoisyGRPO, a systematic multimodal RL framework that injects controllable noise into visual inputs to encourage exploration and explicitly models the advantage estimation process within a Bayesian framework. Specifically, NoisyGRPO improves RL training through: (1) a Noise-Injected Exploration Policy, which perturbs visual inputs with Gaussian noise so that the policy explores a wider range of visual scenarios; and (2) Bayesian Advantage Estimation, which formulates advantage estimation as a principled Bayesian inference problem in which the injected noise level serves as the prior and the observed trajectory reward as the likelihood. Fusing these two sources of information yields a robust posterior estimate of trajectory advantage, effectively guiding MLLMs to prefer visually grounded trajectories over noisy ones. Experiments on standard CoT quality, general capability, and hallucination benchmarks demonstrate that NoisyGRPO substantially improves generalization and robustness, especially in RL settings with small-scale MLLMs such as Qwen2.5-VL 3B. The project page is available at https://artanic30.github.io/project_pages/NoisyGRPO/.
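To make the two components above concrete, below is a minimal PyTorch sketch of how noise-injected exploration and a Bayesian-style fusion of noise prior and reward likelihood could be wired into a GRPO-like advantage computation. This is an illustrative assumption, not the paper's exact formulation: the linear interpolation used for noise injection, the choice of `1 - noise_level` as the prior mean, the precision-weighted (conjugate-Gaussian-style) fusion, the `prior_weight` hyperparameter, and the function names `inject_visual_noise` and `bayesian_advantage` are all hypothetical.

```python
import torch

def inject_visual_noise(pixel_values: torch.Tensor, noise_level: float) -> torch.Tensor:
    """Perturb visual inputs with Gaussian noise scaled by a controllable noise level.

    noise_level in [0, 1] interpolates between the clean image (0) and pure noise (1);
    this interpolation scheme is an illustrative assumption, not the paper's recipe.
    """
    noise = torch.randn_like(pixel_values)
    return (1.0 - noise_level) * pixel_values + noise_level * noise


def bayesian_advantage(noise_levels: torch.Tensor,
                       rewards: torch.Tensor,
                       prior_weight: float = 1.0) -> torch.Tensor:
    """Fuse the injected noise level (prior) with the observed trajectory reward
    (likelihood) into a posterior quality estimate, then normalize across the
    rollout group (GRPO-style) to obtain per-trajectory advantages.

    Sketch only: 1 - noise_level is treated as the prior expectation of trajectory
    quality and combined with the rescaled reward via a precision-weighted update;
    the exact posterior used by NoisyGRPO is defined in the paper, not here.
    """
    # Prior mean: trajectories generated under heavier noise are expected to score lower.
    prior_mean = 1.0 - noise_levels
    # Likelihood mean: observed rewards for each sampled trajectory, rescaled to [0, 1].
    likelihood_mean = (rewards - rewards.min()) / (rewards.max() - rewards.min() + 1e-8)
    # Precision-weighted fusion of prior and likelihood (assumed conjugate-Gaussian form).
    posterior_mean = (prior_weight * prior_mean + likelihood_mean) / (prior_weight + 1.0)
    # Group normalization over the rollouts, as in GRPO-style advantage estimation.
    return (posterior_mean - posterior_mean.mean()) / (posterior_mean.std() + 1e-8)


# Toy usage: four rollouts of one prompt, each generated under a different noise level.
if __name__ == "__main__":
    torch.manual_seed(0)
    image = torch.rand(3, 224, 224)                     # placeholder visual input
    noise_levels = torch.tensor([0.0, 0.2, 0.5, 0.8])   # controllable noise per rollout
    noisy_images = [inject_visual_noise(image, s.item()) for s in noise_levels]
    rewards = torch.tensor([0.9, 0.7, 0.4, 0.3])        # hypothetical trajectory rewards
    print(bayesian_advantage(noise_levels, rewards))
```

In this sketch, a higher injected noise level lowers the prior expectation of trajectory quality, so a trajectory that still earns a high reward under heavy noise receives a relatively larger posterior advantage; the actual fusion rule and scaling in NoisyGRPO should be taken from the paper itself.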