Dialogue bots have been widely applied in customer service scenarios to provide a timely and user-friendly experience. These bots must classify the appropriate domain of a dialogue, understand the intent of users, and generate proper responses. Existing dialogue pre-training models are designed for only a few dialogue tasks and ignore the weakly-supervised expert knowledge available in customer service dialogues. In this paper, we propose a novel unified knowledge prompt pre-training framework, UFA (\textbf{U}nified Model \textbf{F}or \textbf{A}ll Tasks), for customer service dialogues. We formulate all customer service dialogue tasks as a unified text-to-text generation task and introduce a knowledge-driven prompt strategy to jointly learn from a mixture of distinct dialogue tasks. We pre-train UFA on a large-scale Chinese customer service corpus collected from practical scenarios and obtain significant improvements on both natural language understanding (NLU) and natural language generation (NLG) benchmarks.
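The unified text-to-text formulation with knowledge-driven prompts can be illustrated with a minimal sketch. All function names, prompt wordings, and the knowledge field below are illustrative assumptions, not the paper's actual implementation: each task (domain classification, intent understanding, response generation) is serialized into a single source string that a text-to-text model can consume.

```python
# Minimal sketch (assumed format): casting distinct customer-service
# dialogue tasks into one text-to-text input with a task-specific
# prompt and an optional weakly-supervised knowledge snippet.
# Prompt strings and field layout are hypothetical.

def to_text_to_text(task: str, dialogue: str, knowledge: str = "") -> str:
    """Build a unified source string: [task prompt] [knowledge] [dialogue]."""
    prompts = {
        "domain": "classify the dialogue domain:",
        "intent": "classify the user intent:",
        "response": "generate the next agent response:",
    }
    parts = [prompts[task]]
    if knowledge:
        parts.append(f"knowledge: {knowledge}")
    parts.append(f"dialogue: {dialogue}")
    return " ".join(parts)

src = to_text_to_text(
    "intent",
    "user: my package never arrived",
    knowledge="domain=logistics",
)
print(src)
```

Because every task shares this one input/output interface, a single pre-trained model can be jointly trained on a mixture of NLU and NLG tasks without task-specific heads.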