Users often ask dialogue systems ambiguous questions that require clarification. We show that current language models rarely ask users to clarify ambiguous questions and instead provide incorrect answers. To address this, we introduce CLAM: a framework for getting language models to selectively ask for clarification of ambiguous user questions. In particular, we show that we can prompt language models to detect whether a given question is ambiguous, to generate an appropriate clarifying question to ask the user, and to give a final answer after receiving the clarification. We also show that we can simulate users by providing language models with privileged information; this lets us automatically evaluate multi-turn clarification dialogues. Finally, CLAM significantly improves language models' accuracy on mixed ambiguous and unambiguous questions relative to SotA.
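As a minimal sketch of the pipeline the abstract describes, the following Python assumes only a generic `complete(prompt)` text-completion call; the prompts, function names, and control flow here are illustrative placeholders, not the paper's exact prompts or implementation.

```python
from typing import Callable

# `complete` stands in for any text-completion call (prompt -> completion).
LM = Callable[[str], str]


def is_ambiguous(complete: LM, question: str) -> bool:
    """Step 1: prompt the LM to judge whether the question is ambiguous."""
    verdict = complete(
        "Is the following question ambiguous? Answer 'yes' or 'no'.\n"
        f"Question: {question}\nAnswer:"
    )
    return verdict.strip().lower().startswith("yes")


def clarifying_question(complete: LM, question: str) -> str:
    """Step 2: for an ambiguous question, generate one clarifying question."""
    return complete(
        "The following question is ambiguous. Write one clarifying question "
        f"to ask the user.\nQuestion: {question}\nClarifying question:"
    )


def clam_answer(complete: LM, question: str,
                ask_user: Callable[[str], str]) -> str:
    """Step 3 (selective clarification): answer directly if the question is
    unambiguous; otherwise ask the user a clarifying question and answer
    given their reply."""
    if not is_ambiguous(complete, question):
        return complete(f"Question: {question}\nAnswer:")
    reply = ask_user(clarifying_question(complete, question))
    return complete(
        f"Question: {question}\nUser clarification: {reply}\nAnswer:"
    )


def simulated_user(complete: LM, privileged_info: str) -> Callable[[str], str]:
    """User simulator: an LM given privileged information (e.g. the user's
    intended interpretation) answers clarifying questions, which makes
    multi-turn clarification dialogues automatically evaluable."""
    def ask_user(clarifying_q: str) -> str:
        return complete(
            f"You are a user whose intent is: {privileged_info}\n"
            f"The assistant asks: {clarifying_q}\nYour reply:"
        )
    return ask_user
```

Wiring `clam_answer` to a `simulated_user` closes the loop: the same LM backend can play both assistant and user, so a benchmark of (ambiguous question, intended interpretation, gold answer) triples can be scored without human annotators.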