In real-world dialogue scenarios, existing slot filling models tend to memorize entity patterns, so their generalization degrades significantly when facing Out-of-Vocabulary (OOV) problems. To address this issue, we propose an OOV-robust slot filling model based on multi-level data augmentation that tackles the OOV problem from both the word and the slot perspective. We present a unified contrastive learning framework, which pulls the representations of the original sample and its augmented samples together, to make the model resistant to OOV problems. We evaluate the model's performance on specific slots and carefully design test data with OOV word perturbations to further demonstrate its effectiveness on OOV words. Experiments on two datasets show that our approach outperforms previous state-of-the-art methods on both OOV slots and OOV words.
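The core idea of pulling together representations of an original sample and its augmented views can be sketched with a generic InfoNCE-style objective. This is a minimal illustration, not the paper's exact loss: the encoder, augmentation scheme, and temperature value here are assumptions for demonstration.

```python
import numpy as np

def contrastive_pull_loss(orig_repr, aug_repr, temperature=0.1):
    """Generic InfoNCE-style loss: each original representation is pulled
    toward its own augmented view (diagonal positives) and pushed away
    from other samples in the batch (off-diagonal negatives).
    Note: a sketch only; the paper's actual objective may differ."""
    # L2-normalize rows so the dot product is cosine similarity.
    z1 = orig_repr / np.linalg.norm(orig_repr, axis=1, keepdims=True)
    z2 = aug_repr / np.linalg.norm(aug_repr, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature          # (batch, batch) similarities
    # Numerically stable log-softmax over each row.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal (sample i pairs with its own view i).
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
orig = rng.normal(size=(8, 32))
aug = orig + 0.05 * rng.normal(size=(8, 32))    # augmentation ~ small perturbation
loss_aligned = contrastive_pull_loss(orig, aug)
loss_random = contrastive_pull_loss(orig, rng.normal(size=(8, 32)))
print(loss_aligned, loss_random)
```

As the training objective decreases, matched original/augmented pairs become more similar than random pairs, which is the mechanism that makes the encoder insensitive to OOV-style perturbations.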