Membership inference attacks (MIAs) on large language models (LLMs) pose significant privacy risks across the various stages of model training. Recent advances in Reinforcement Learning with Verifiable Rewards (RLVR) have brought a profound paradigm shift to LLM training, particularly for complex reasoning tasks. However, the on-policy nature of RLVR introduces a unique privacy leakage pattern: because training relies on self-generated responses rather than fixed ground-truth outputs, membership inference must now determine whether a given prompt (independent of any specific response) was used during fine-tuning. This creates a threat model in which leakage arises not from answer memorization but from training-induced changes in model behavior. To audit this novel privacy risk, we propose the Divergence-in-Behavior Attack (DIBA), the first membership inference framework specifically designed for RLVR. DIBA shifts the focus from memorization to behavioral change, leveraging measurable shifts in model behavior along two axes: advantage-side improvement (e.g., correctness gain) and logit-side divergence (e.g., policy drift). Through comprehensive evaluations, we demonstrate that DIBA significantly outperforms existing baselines, achieving around 0.8 AUC and an order-of-magnitude higher TPR@0.1%FPR. We validate DIBA's superiority across multiple settings, including in-distribution, cross-dataset, cross-algorithm, and black-box scenarios, as well as extensions to vision-language models. Furthermore, our attack remains robust under moderate defensive measures. To the best of our knowledge, this is the first work to systematically analyze privacy vulnerabilities in RLVR, revealing that even in the absence of explicit supervision, training data exposure can be reliably inferred through behavioral traces.
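To make the two signals concrete, the following is a minimal sketch (not the authors' implementation) of how a DIBA-style membership score could combine an advantage-side signal (correctness gain on the candidate prompt) with a logit-side signal (policy divergence between the RLVR-trained model and its reference). All function names, the KL-based divergence, and the linear weighting are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a behavior-divergence membership score for RLVR.
# Assumptions: the attacker can estimate pass rates before/after training and
# can obtain per-token logits from both models on the same sampled response.
import numpy as np


def correctness_gain(acc_after: float, acc_before: float) -> float:
    """Advantage-side signal: change in verifiable-reward pass rate on the prompt."""
    return acc_after - acc_before


def policy_drift(logits_after: np.ndarray, logits_before: np.ndarray) -> float:
    """Logit-side signal: mean per-token KL divergence between the fine-tuned
    policy and the reference policy on the same response tokens."""
    def log_softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

    logp_after = log_softmax(logits_after)
    logp_before = log_softmax(logits_before)
    kl = (np.exp(logp_after) * (logp_after - logp_before)).sum(axis=-1)
    return float(kl.mean())


def diba_style_score(acc_after, acc_before, logits_after, logits_before,
                     alpha=1.0, beta=1.0):
    """Combined score; larger values suggest membership. The weights (alpha,
    beta) and the linear combination are assumptions for illustration only."""
    return (alpha * correctness_gain(acc_after, acc_before)
            + beta * policy_drift(logits_after, logits_before))


# Toy usage: 5 response tokens over a 4-token vocabulary.
rng = np.random.default_rng(0)
logits_before = rng.normal(size=(5, 4))
logits_after = logits_before + rng.normal(scale=0.3, size=(5, 4))
print(diba_style_score(0.75, 0.40, logits_after, logits_before))
```

An attacker would threshold this score (calibrated on known non-member prompts) to decide membership; the actual DIBA features and decision rule are described in the paper itself.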