Reward modeling has become a cornerstone of aligning large language models (LLMs) with human preferences. Yet, when extended to subjective and open-ended domains such as role play, existing reward models degrade severely, struggling to capture nuanced, persona-grounded human judgments. To address this gap, we introduce RoleRMBench, the first systematic benchmark for reward modeling in role-playing dialogue, covering seven fine-grained capabilities ranging from narrative management to role consistency and engagement. Evaluation on RoleRMBench reveals large and consistent gaps between general-purpose reward models and human judgment, particularly along narrative and stylistic dimensions. We further propose RoleRM, a reward model trained with Continuous Implicit Preferences (CIP), which reformulates subjective evaluation as continuous, consistent pairwise supervision under multiple structuring strategies. Comprehensive experiments show that RoleRM surpasses strong open- and closed-source reward models by over 24% on average, with substantial gains in narrative coherence and stylistic fidelity. Our findings highlight the importance of continuous preference representation and annotation consistency, establishing a foundation for subjective alignment in human-centered dialogue systems.