In task-oriented dialogue systems, Dialogue State Tracking (DST) aims to extract users' intentions from the dialogue history. Most existing approaches suffer from error propagation and are unable to dynamically select relevant information when utilizing previous dialogue states. Moreover, the relations between the updates of different slots provide vital clues for DST, yet existing approaches rely only on predefined graphs to capture these relations indirectly. In this paper, we propose a Dialogue State Distillation Network (DSDN) that utilizes the relevant information in previous dialogue states and mitigates the gap in their utilization between training and testing. It can thus dynamically exploit previous dialogue states while avoiding the introduction of error propagation. Furthermore, we propose an inter-slot contrastive learning loss to effectively capture slot co-update relations from the dialogue context. Experiments are conducted on the widely used MultiWOZ 2.0 and MultiWOZ 2.1 datasets. The experimental results show that our proposed model achieves state-of-the-art performance for DST.
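The abstract does not specify the form of the inter-slot contrastive learning loss. As a rough illustration only, a minimal InfoNCE-style contrastive objective is sketched below; the pairing scheme (slots that co-update in a turn as positives, other slots as negatives), the embedding dimensions, and all function names are assumptions for the sketch, not the authors' actual method.

```python
import math
import random

def cosine(a, b):
    # cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def inter_slot_contrastive_loss(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style loss (illustrative): pull embeddings of slots that
    co-update with the anchor slot toward it, push non-co-updating slot
    embeddings away. tau is the temperature."""
    pos = [math.exp(cosine(anchor, p) / tau) for p in positives]
    neg = [math.exp(cosine(anchor, n) / tau) for n in negatives]
    denom = sum(pos) + sum(neg)
    # average negative log-probability of each positive vs. all candidates
    return sum(-math.log(p / denom) for p in pos) / len(pos)

# Toy slot embeddings (hypothetical, dimension 8).
random.seed(0)
anchor = [random.gauss(0, 1) for _ in range(8)]
co_updated = [[a + 0.1 * random.gauss(0, 1) for a in anchor]]      # positive: similar
others = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]  # negatives
loss = inter_slot_contrastive_loss(anchor, co_updated, others)
```

Minimizing such a loss makes representations of slots that tend to update together more similar, which is one standard way a contrastive objective can encode co-update relations without a predefined graph.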