Learning high-quality dialogue representations is essential for solving a variety of dialogue-oriented tasks, especially considering that dialogue systems often suffer from data scarcity. In this paper, we introduce Dialogue Sentence Embedding (DSE), a self-supervised contrastive learning method that learns effective dialogue representations suitable for a wide range of dialogue tasks. DSE learns from dialogues by taking consecutive utterances of the same dialogue as positive pairs for contrastive learning. Despite its simplicity, DSE achieves significantly stronger representation capability than other dialogue representation and universal sentence representation models. We evaluate DSE on five downstream dialogue tasks that examine dialogue representation at different semantic granularities. Experiments in few-shot and zero-shot settings show that DSE outperforms baselines by a large margin; for example, it achieves a 13% average performance improvement over the strongest unsupervised baseline in 1-shot intent classification on 6 datasets. We also provide analyses of the benefits and limitations of our model.
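To make the training signal concrete, the sketch below pairs consecutive utterances of the same dialogue as positives and scores them with an InfoNCE-style contrastive loss over in-batch negatives. The helper names (build_pairs, contrastive_loss), the temperature value, and the use of in-batch negatives are illustrative assumptions; the abstract does not specify the exact objective or encoder.

```python
# A minimal sketch of contrastive learning on consecutive utterances,
# assuming an InfoNCE-style objective with in-batch negatives. Details
# are illustrative guesses, not the paper's exact implementation.
import torch
import torch.nn.functional as F

def build_pairs(dialogues):
    """Pair each utterance with the next utterance of the same dialogue."""
    return [(d[i], d[i + 1]) for d in dialogues for i in range(len(d) - 1)]

def contrastive_loss(z1, z2, temperature=0.05):
    """InfoNCE over in-batch negatives: row i of z1 matches row i of z2."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature   # (B, B) cosine-similarity matrix
    labels = torch.arange(z1.size(0))  # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

dialogues = [
    ["hi, how can I help?", "I lost my card.", "sorry to hear that."],
    ["what's the weather today?", "sunny all day."],
]
pairs = build_pairs(dialogues)  # 3 positive pairs from the two dialogues

# Random vectors stand in for encoder outputs here; in practice each side
# of a pair would be embedded by a pretrained language-model encoder.
B, dim = len(pairs), 16
z1 = torch.randn(B, dim, requires_grad=True)
z2 = torch.randn(B, dim, requires_grad=True)
loss = contrastive_loss(z1, z2)
loss.backward()
print(f"{B} pairs, loss = {loss.item():.3f}")
```

The in-batch-negative design means every other pair in the batch serves as a negative example, which is a common and efficient choice for contrastive sentence-embedding objectives.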