Spoken language understanding (SLU), a core component of task-oriented dialogue systems, has made substantial progress on single-turn dialogue. However, performance on multi-turn dialogue remains unsatisfactory, in that existing multi-turn SLU methods have low portability and poor compatibility with single-turn SLU models. Moreover, existing multi-turn SLU methods do not exploit historical predicted results when predicting the current utterance, wasting helpful information. To address these shortcomings, in this paper we propose a novel Result-based Portable Framework for SLU (RPFSLU). RPFSLU allows most existing single-turn SLU models to obtain contextual information from multi-turn dialogues and takes full advantage of the predicted results in the dialogue history during the current prediction. Experimental results on the public dataset KVRET show that all baseline SLU models are enhanced by RPFSLU on multi-turn SLU tasks.