Large language models (LLMs) are increasingly deployed as conversational assistants in open-domain, multi-turn settings, where users often provide incomplete or ambiguous information. However, existing LLM-focused clarification benchmarks primarily assume single-turn interactions or cooperative users, limiting their ability to evaluate clarification behavior in realistic settings. We introduce \textbf{ClarifyMT-Bench}, a benchmark for multi-turn clarification grounded in a five-dimensional ambiguity taxonomy and a set of six behaviorally diverse simulated user personas. Through a hybrid LLM-human pipeline, we construct 6,120 multi-turn dialogues capturing diverse ambiguity sources and interaction patterns. Evaluating ten representative LLMs uncovers a consistent under-clarification bias: LLMs tend to answer prematurely, and performance degrades as dialogue depth increases. To mitigate this, we propose \textbf{ClarifyAgent}, an agentic approach that decomposes clarification into perception, forecasting, tracking, and planning, substantially improving robustness across ambiguity conditions. ClarifyMT-Bench establishes a reproducible foundation for studying when LLMs should ask, when they should answer, and how to navigate ambiguity in real-world human-LLM interactions.
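The abstract only names ClarifyAgent's four-stage decomposition (perception, forecasting, tracking, planning). The minimal Python sketch below shows how such a loop could be wired together; every class name, cue list, data structure, and threshold is an illustrative assumption, not the paper's implementation.

\begin{verbatim}
# Illustrative sketch only: stage names follow the abstract's perception /
# forecasting / tracking / planning decomposition, but all identifiers and
# heuristics here are hypothetical, not the authors' ClarifyAgent.
import re
from dataclasses import dataclass, field


@dataclass
class DialogueState:
    """Tracks which ambiguity slots remain unresolved across turns."""
    open_slots: set = field(default_factory=set)
    resolved_slots: set = field(default_factory=set)


class ClarifyAgentSketch:
    def perceive(self, user_turn: str) -> set:
        # Perception: detect ambiguity sources in the latest user turn.
        # A real system would classify against the five-dimensional
        # taxonomy; keyword cues are a stand-in here.
        cues = {"it": "referent", "soon": "time", "there": "location"}
        tokens = set(re.findall(r"[a-z']+", user_turn.lower()))
        return {slot for word, slot in cues.items() if word in tokens}

    def forecast(self, slots: set) -> float:
        # Forecasting: estimate how likely a direct answer is to fail
        # given the still-ambiguous slots (a simple proxy score).
        return min(1.0, 0.4 * len(slots))

    def track(self, state: DialogueState, new_slots: set, turn: str) -> None:
        # Tracking: update unresolved slots as the dialogue progresses.
        state.open_slots |= new_slots
        answered = {s for s in state.open_slots if s in turn.lower()}
        state.resolved_slots |= answered
        state.open_slots -= answered

    def plan(self, state: DialogueState, risk: float) -> str:
        # Planning: ask a targeted question when risk is high, else answer.
        if state.open_slots and risk > 0.3:
            slot = sorted(state.open_slots)[0]
            return f"Could you clarify the {slot} you have in mind?"
        return "<answer the request directly>"

    def step(self, state: DialogueState, user_turn: str) -> str:
        slots = self.perceive(user_turn)
        self.track(state, slots, user_turn)
        return self.plan(state, self.forecast(state.open_slots))


if __name__ == "__main__":
    agent, state = ClarifyAgentSketch(), DialogueState()
    print(agent.step(state, "Can you book it for me soon?"))   # asks to clarify
    print(agent.step(state, "The referent is the 3pm flight; any time works."))
\end{verbatim}

In this toy run the first turn leaves the referent and time slots open, so the planner asks a clarification question; once the follow-up resolves them, the planner switches to answering directly, mirroring the ask-versus-answer decision the benchmark evaluates.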