Collecting dialogue state labels (slots and values) for learning dialogue state tracking (DST) models can be costly, especially given the wide application of dialogue systems in emerging domains. In this paper, we focus on how to learn a DST model efficiently with limited labeled data. We design a prompt learning framework for few-shot DST that consists of two main components: a value-based prompt and an inverse prompt mechanism. The framework aims to exploit the language understanding and generation abilities of pre-trained language models (PLMs). First, we design value-based prompt functions to probe DST-related knowledge from the PLM; these functions do not rely on a known ontology of slots. Further, an inverse prompt mechanism is used to self-check the "prompted" knowledge and help the PLM better understand the essence of the DST task. Experiments show that our model can generate unseen slots and outperforms existing state-of-the-art few-shot methods. This indicates that, with the help of prompt learning, DST-related knowledge can be probed from PLMs and used to address low-resource DST efficiently.
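To make the two components concrete, the following is a minimal sketch of what a value-based prompt and its inverse might look like as templates fed to a generative PLM. The template wording and function names are illustrative assumptions, not the paper's actual prompt functions; the key idea shown is that the forward prompt asks the model to generate a slot name for a value found in the utterance (requiring no slot ontology), while the inverse prompt regenerates the value from the predicted slot so the prediction can be self-checked.

```python
def value_based_prompt(utterance: str, value: str) -> str:
    # Forward direction: given a candidate value spotted in the utterance,
    # ask the PLM to generate the slot name. No predefined slot ontology
    # is needed, since the slot is produced as free text.
    return f'{utterance} In this sentence, "{value}" is the value of slot:'

def inverse_prompt(utterance: str, slot: str) -> str:
    # Inverse direction: given the slot the PLM just predicted, ask it to
    # regenerate the value. If the regenerated value does not match the
    # original one, the forward prediction is likely unreliable.
    return f'{utterance} In this sentence, the value of slot "{slot}" is:'

utt = "I want a cheap restaurant in the centre."
forward = value_based_prompt(utt, "cheap")      # PLM might generate "price range"
backward = inverse_prompt(utt, "price range")   # PLM should regenerate "cheap"
```

In a real pipeline, each template string would be passed to the PLM's decoder, and agreement between the forward and inverse passes would serve as the self-check described above.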