Process Reward Models (PRMs) enhance the reasoning ability of LLMs by providing step-level supervision. However, their widespread adoption is limited by expensive manual step-level annotation and by the poor generalization of static training data to novel errors. We introduce Adversarially Trained PRMs (\texttt{APRM}), in which a Generator ($G$) learns to produce reasoning errors that deceive a PRM ($R$), while $R$ concurrently learns to detect them. This interaction yields progressively harder negatives for $R$, improving its robustness and generalization to novel errors without requiring manual step-level labels. Averaged across diverse mathematical reasoning benchmarks, \texttt{APRM} improves solver accuracy by $+3.4$ percentage points (pp) over the strongest PRM baseline, with gains of $+5.3$ pp on out-of-distribution tasks.
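The abstract describes an adversarial loop between a generator $G$ and a PRM $R$: $G$ injects step-level errors to fool $R$, and $R$ is trained on the resulting hard negatives, whose error locations are known by construction (so no manual step labels are needed). The Python sketch below is a minimal, hypothetical illustration of such a loop; the class names (`ReasoningGenerator`, `ProcessRewardModel`), the stub corruption/scoring logic, and the update rules are assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch of an adversarial PRM training round, assuming stub models.
# All names and update rules here are illustrative placeholders.

import random


class ReasoningGenerator:
    """Generator G: perturbs a correct reasoning trace to inject a step-level error."""

    def corrupt(self, steps):
        # Hypothetical corruption: rewrite one intermediate step to be subtly wrong.
        bad = list(steps)
        i = random.randrange(len(bad))
        bad[i] = bad[i] + " [subtly wrong step]"
        return bad, i

    def update(self, reward):
        # Placeholder: in practice G would be optimized (e.g., via RL) to
        # maximize the PRM's failure to flag the injected error.
        pass


class ProcessRewardModel:
    """PRM R: scores each reasoning step; a low score flags a suspected error."""

    def score(self, steps):
        # Placeholder scoring; a real PRM would be a learned step-level scorer.
        return [0.0 if "wrong" in s else 1.0 for s in steps]

    def update(self, steps, error_index):
        # Placeholder: in practice R is trained to assign low reward to the
        # corrupted step and high reward to untouched steps.
        pass


def adversarial_round(generator, prm, correct_trace):
    """One round: G crafts a hard negative, R is trained to detect it."""
    corrupted, err_idx = generator.corrupt(correct_trace)
    step_scores = prm.score(corrupted)
    # G is rewarded when R fails to flag the corrupted step (its score stays high).
    fooled = step_scores[err_idx] > 0.5
    generator.update(reward=1.0 if fooled else 0.0)
    # R is supervised with the known error location, so no manual labels are needed.
    prm.update(corrupted, err_idx)
    return fooled


if __name__ == "__main__":
    G, R = ReasoningGenerator(), ProcessRewardModel()
    trace = ["Step 1: set up the equation.", "Step 2: simplify.", "Step 3: conclude."]
    for _ in range(3):
        adversarial_round(G, R, trace)
```

As $R$ improves, $G$ must produce subtler errors to keep fooling it, which is the source of the progressively harder negatives mentioned above.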

