Goal-oriented dialogue systems have become a prominent customer-care interaction channel for most businesses. However, not all interactions go smoothly, and misunderstanding the customer's intent is a major cause of dialogue failure. We show that intent prediction can be improved by training a deep text-to-text neural model to generate successive user utterances from unlabeled dialogue data. To that end, we define a multi-task training regime that leverages successive user-utterance generation to improve intent prediction. Our approach achieves the reported improvement through two complementary factors: first, it exploits a large amount of unlabeled dialogue data via the auxiliary generation task; second, it uses the generated user utterance as an additional signal for the intent prediction model. Lastly, we present a novel look-ahead approach that uses user-utterance generation to improve intent prediction at inference time. Specifically, we generate counterfactual successive user utterances for conversations with ambiguous predicted intents, and disambiguate the prediction by reassessing the concatenated sequence of available and generated utterances.