As the functionality of dialogue systems evolves, hybrid dialogue systems that both accomplish user-specific goals and engage in open-topic chitchat with users are attracting growing attention. Existing research learns the two tasks concurrently with multi-task fusion techniques but ignores the negative transfer caused by their distinct textual styles. We therefore use contrastive learning based on a latent variable model to decouple the different textual styles in the latent space, and devise supervised and self-supervised constructions of positive and negative samples for diverse datasets. In addition, to exploit the style information captured by the decoupled latent variables, we employ a style prefix that incorporates the latent variables to further control the generation of responses with different styles. We conduct extensive experiments on three dialogue datasets: one hybrid dialogue dataset and two task-oriented dialogue datasets. The results demonstrate that our method mitigates the negative style transfer issue and achieves state-of-the-art performance on multiple dialogue datasets.
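To make the contrastive-decoupling idea concrete, the sketch below shows an InfoNCE-style objective over latent style vectors: an anchor is pulled toward a same-style positive and pushed away from different-style negatives in the latent space. This is a minimal illustrative implementation in numpy, not the authors' code; the function name, cosine similarity choice, and temperature value are assumptions.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss over latent style vectors.

    anchor, positive: shape (d,) latent vectors of the same textual style.
    negatives: shape (k, d) latent vectors of other styles.
    Minimizing this loss increases the anchor-positive similarity
    relative to the anchor-negative similarities.
    """
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    pos_sim = cos(anchor, positive) / temperature
    neg_sims = np.array([cos(anchor, n) for n in negatives]) / temperature

    # Softmax cross-entropy with the positive pair as the target class,
    # computed in a numerically stable way (subtract the max logit).
    logits = np.concatenate([[pos_sim], neg_sims])
    m = logits.max()
    log_softmax = logits - m - np.log(np.exp(logits - m).sum())
    return -log_softmax[0]
```

In the supervised setting described above, positives would be drawn from responses with the same style label (e.g. chitchat vs. task-oriented), while the self-supervised construction would obtain positives from augmented views of the same utterance.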