Persona-assigned large language models (LLMs) are used in domains such as education, healthcare, and sociodemographic simulation. Yet, they are typically evaluated only in short, single-round settings that do not reflect real-world usage. We introduce an evaluation protocol that combines long persona dialogues (over 100 rounds) with evaluation datasets to create dialogue-conditioned benchmarks that can robustly measure long-context effects. We then investigate the effect of dialogue length on the persona fidelity, instruction following, and safety of seven state-of-the-art open- and closed-weight LLMs. We find that persona fidelity degrades over the course of dialogues, especially in goal-oriented conversations, where models must sustain both persona fidelity and instruction following. We identify a trade-off between fidelity and instruction following, with non-persona baselines initially outperforming persona-assigned models; as dialogues progress and fidelity fades, persona responses become increasingly similar to baseline responses. Our findings highlight the fragility of persona applications in extended interactions, and our work provides a protocol to systematically measure such failures.
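To make the dialogue-conditioned benchmark construction concrete, the following is a minimal sketch, assuming a generic chat-completion interface: a persona instruction and a prefix of the long dialogue are replayed as context before each benchmark question, and re-running at increasing round counts (with an empty persona and dialogue as the non-persona baseline) yields the long-context conditions. The function names, message format, and round counts below are illustrative placeholders, not the paper's actual implementation.

```python
# Sketch of dialogue-conditioned benchmark evaluation (illustrative only).
from typing import Callable, Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}

def build_context(persona: str, dialogue: List[Message]) -> List[Message]:
    """Prefix the persona instruction, then replay the dialogue history."""
    return [{"role": "system", "content": persona}] + dialogue

def evaluate_item(query_model: Callable[[List[Message]], str],
                  persona: str,
                  dialogue: List[Message],
                  benchmark_question: str,
                  rounds: int) -> str:
    """Ask a benchmark question after the first `rounds` rounds of dialogue.

    Repeating this at increasing `rounds` (e.g. 0, 25, 50, 100) gives the
    dialogue-length conditions; a run with an empty persona and no dialogue
    serves as the non-persona baseline.
    """
    history = build_context(persona, dialogue[: 2 * rounds])  # 2 messages per round
    history.append({"role": "user", "content": benchmark_question})
    return query_model(history)
```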