With the development of pre-trained language models, remarkable success has been achieved in dialogue understanding (DU). However, current DU approaches usually employ independent models for each distinct DU task, without considering the shared knowledge across different DU tasks. In this paper, we propose a unified generative dialogue understanding framework, named {\em UniDU}, to achieve effective information exchange across diverse DU tasks. Specifically, we reformulate all DU tasks into a unified prompt-based generative paradigm. More importantly, we introduce a novel model-agnostic multi-task training strategy (MATS) that dynamically adapts the weights of diverse tasks during training, based on the nature and amount of available data for each task, so as to maximize knowledge sharing. Experiments on ten DU datasets covering five fundamental DU tasks show that the proposed UniDU framework significantly outperforms well-designed task-specific methods on all tasks. MATS also reveals the knowledge-sharing structure among these tasks. Finally, UniDU achieves promising performance on unseen dialogue domains, demonstrating strong potential for generalization.
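To make the unified reformulation concrete, the following is a minimal sketch, assuming a simple prompt-serialization scheme; the task names, prompt templates, and the `serialize_example` function are illustrative assumptions, not UniDU's actual formulation:

```python
# Illustrative sketch only: the templates and task names below are
# hypothetical, meant to show how heterogeneous DU tasks could be cast
# into one text-to-text format served by a single generative model.

def serialize_example(task: str, dialogue: str) -> str:
    """Serialize a DU example into a prompt-based text-to-text input."""
    templates = {
        "intent_detection": "task: intent detection dialogue: {d} what is the user's intent?",
        "slot_filling": "task: slot filling dialogue: {d} extract the slot values.",
        "state_tracking": "task: state tracking dialogue: {d} what is the current belief state?",
        "summarization": "task: summarization dialogue: {d} summarize the conversation.",
    }
    return templates[task].format(d=dialogue)

# Every task shares the same encoder-decoder model; only the prompt
# differs, so supervision from one task can inform the others.
print(serialize_example("intent_detection", "user: book me a flight to Paris"))
```

Under this view, multi-task training reduces to mixing serialized examples from all tasks into one stream, which is what makes a dynamic task-weighting strategy such as MATS applicable.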