Multi-Intent Spoken Language Understanding (SLU), a novel and more complex SLU scenario, is attracting increasing attention. Unlike traditional SLU, each intent in this scenario has its own specific scope, and semantic information outside that scope can even hinder prediction, which greatly increases the difficulty of intent detection. Worse, guiding slot filling with these inaccurate intent labels suffers from error propagation, resulting in unsatisfactory overall performance. To address these challenges, in this paper we propose a novel Scope-Sensitive Result Attention Network (SSRAN) based on the Transformer, which contains a Scope Recognizer (SR) and a Result Attention Network (RAN). The Scope Recognizer assigns scope information to each token, reducing the distraction of out-of-scope tokens. The Result Attention Network effectively exploits the bidirectional interaction between the results of slot filling and intent detection, mitigating the error propagation problem. Experiments on two public datasets show that our model significantly improves SLU performance (by 5.4\% and 2.1\% in overall accuracy) over the state-of-the-art baseline.
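The two components described above can be illustrated with a minimal numerical sketch. Everything below is a hypothetical toy construction, not the authors' implementation: the dimensions, the sigmoid scope gate, and the way each task attends over the other's result distribution are all assumptions made only to show the shape of the idea (per-token scope weighting, then a bidirectional exchange of first-pass results between slot filling and intent detection).

```python
# Toy sketch of scope gating and bidirectional result interaction.
# All names and dimensions here are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, d = 5, 8                 # tokens, hidden size (toy values)
n_intents, n_slots = 3, 6   # label set sizes (toy values)

H = rng.standard_normal((T, d))        # encoder output, one vector per token

# Scope Recognizer (sketch): a per-token gate in [0, 1] that
# down-weights tokens outside an intent's scope.
w_scope = rng.standard_normal(d)
scope = 1.0 / (1.0 + np.exp(-(H @ w_scope)))
H_scoped = H * scope[:, None]

# First-pass predictions for both tasks.
W_int = rng.standard_normal((d, n_intents))
W_slot = rng.standard_normal((d, n_slots))
intent_logits = H_scoped.mean(axis=0) @ W_int   # utterance-level intent scores
slot_logits = H_scoped @ W_slot                 # token-level slot scores

# Result attention (sketch): each task re-predicts after mixing in a
# context vector built from the *other* task's first-pass result
# distribution over (hypothetical) label embeddings.
E_int = rng.standard_normal((n_intents, d))
E_slot = rng.standard_normal((n_slots, d))
intent_ctx = softmax(intent_logits) @ E_int          # (d,)
slot_ctx = softmax(slot_logits, axis=-1) @ E_slot    # (T, d)

slot_logits2 = (H_scoped + intent_ctx[None, :]) @ W_slot      # slots see intent results
intent_logits2 = (H_scoped + slot_ctx).mean(axis=0) @ W_int   # intents see slot results

print(slot_logits2.shape, intent_logits2.shape)
```

The point of the sketch is only the information flow: the scope gate filters tokens before either task predicts, and the second-pass logits for each task are conditioned on the first-pass results of the other, rather than slot filling depending one-way on possibly wrong intent labels.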