We tackle the Dialogue Belief State Tracking (DST) problem of task-oriented conversational systems. Recent approaches to this problem leveraging Transformer-based models have yielded strong results. However, training these models is expensive in terms of both computational resources and time. Additionally, collecting high-quality annotated dialogue datasets remains a challenge for researchers because of the extensive annotation these models require for training. Motivated by the recent success of pre-trained language models and prompt-based learning, we explore prompt-based few-shot learning for Dialogue Belief State Tracking. We formulate DST as a two-stage prompt-based language modelling task, train language models for both stages, and present a comprehensive empirical analysis of their separate and joint performance. We demonstrate the potential of prompt-based methods for few-shot DST and provide directions for future improvement.