A large-scale conversational agent can struggle to understand user utterances that carry various ambiguities, such as ASR ambiguity, intent ambiguity, and hypothesis ambiguity. When an ambiguity is detected, the agent should engage in a clarifying dialog to resolve it before committing to an action. However, asking a clarifying question for every ambiguity occurrence would lead to too many questions and ultimately degrade the user experience. To trigger clarifying questions only when they are necessary for user satisfaction, we propose a neural self-attentive model that leverages the ambiguous hypotheses together with contextual signals. We conduct extensive experiments on five common ambiguity types using real data from a large-scale commercial conversational agent and demonstrate significant improvements over a set of baseline approaches.
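To make the proposed setup concrete, the sketch below shows what a self-attentive clarification-trigger model of this kind might look like. It is a minimal illustration, not the paper's actual architecture: the class name `ClarificationTrigger`, the embedding dimensions, the mean pooling, and the choice of contextual features (e.g. ASR confidence, dialog state) are all assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ClarificationTrigger(nn.Module):
    """Binary classifier: ask a clarifying question (1) or commit to the
    top hypothesis (0). Hypothetical sketch; all dimensions and feature
    names are illustrative assumptions, not taken from the paper."""

    def __init__(self, hyp_dim=256, ctx_dim=32, n_heads=4):
        super().__init__()
        # Self-attention over the candidate hypothesis embeddings lets the
        # model compare competing interpretations of the same utterance.
        self.attn = nn.MultiheadAttention(hyp_dim, n_heads, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hyp_dim + ctx_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, hyps, ctx):
        # hyps: (batch, n_hypotheses, hyp_dim) embeddings of the ambiguous hypotheses
        # ctx:  (batch, ctx_dim) contextual signals (assumed features such as
        #       ASR confidence scores or dialog-state indicators)
        attended, _ = self.attn(hyps, hyps, hyps)
        pooled = attended.mean(dim=1)  # pool over the hypothesis dimension
        logit = self.classifier(torch.cat([pooled, ctx], dim=-1))
        return torch.sigmoid(logit)    # P(trigger a clarifying question)

# Usage with random stand-in data: 8 utterances, 5 candidate hypotheses each.
model = ClarificationTrigger()
hyps = torch.randn(8, 5, 256)
ctx = torch.randn(8, 32)
p_ask = model(hyps, ctx)  # shape (8, 1); ask a question when p_ask exceeds a threshold
```

Under this reading, the decision threshold on the predicted probability is what trades question frequency against ambiguity resolution, which matches the abstract's goal of triggering clarification only when necessary.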