Out-of-domain (OOD) input detection is vital in a task-oriented dialogue system, since accepting unsupported inputs could lead to incorrect responses from the system. This paper proposes OutFlip, a method to automatically generate out-of-domain samples using only the in-domain training dataset. HotFlip, a white-box natural language attack method, is revised to generate out-of-domain samples instead of adversarial examples. Our evaluation results showed that integrating OutFlip-generated out-of-domain samples into the training dataset could significantly improve an intent classification model's out-of-domain detection performance.