Out-of-Domain (OOD) detection is a key component of a task-oriented dialog system; it aims to identify whether a query falls outside the predefined set of supported intents. Previous softmax-based detection algorithms have been shown to be overconfident on OOD samples. In this paper, we analyze how this overconfidence arises from distribution uncertainty due to the mismatch between the training and test distributions, which prevents the model from making confident predictions and can therefore produce abnormal softmax scores. We propose a Bayesian OOD detection framework that calibrates distribution uncertainty using Monte-Carlo Dropout. Our method is flexible and easily pluggable into existing softmax-based baselines, and it gains a 33.33\% OOD F1 improvement while increasing inference time by only 0.41\% compared to MSP. Further analyses show the effectiveness of Bayesian learning for OOD detection.
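To make the Monte-Carlo Dropout idea concrete, here is a minimal sketch of how dropout can be kept active at inference time to obtain a calibrated confidence score for OOD decisions. The classifier architecture, layer sizes, sample count, and the 0.5 threshold below are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: Monte-Carlo Dropout for OOD scoring (assumed PyTorch implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntentClassifier(nn.Module):
    """A simple intent classifier with a dropout layer (illustrative architecture)."""
    def __init__(self, input_dim=768, hidden_dim=256, num_intents=10, p_drop=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(p_drop),          # kept active at test time for MC sampling
            nn.Linear(hidden_dim, num_intents),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_ood_score(model, x, n_samples=20):
    """Average the softmax over several stochastic forward passes.
    A low maximum probability on the averaged distribution suggests an OOD query."""
    model.train()  # enable dropout at inference; freeze BatchNorm etc. if present
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)             # calibrated predictive distribution
    confidence, pred = mean_probs.max(dim=-1)  # MSP-style score on averaged probs
    return confidence, pred

# Usage: flag a query as OOD when confidence falls below a threshold tuned on
# in-domain validation data (the 0.5 value here is only a placeholder).
model = IntentClassifier()
query_embedding = torch.randn(1, 768)          # placeholder sentence embedding
conf, intent = mc_dropout_ood_score(model, query_embedding)
is_ood = conf.item() < 0.5
```

Because the extra cost is just repeated forward passes through an already-trained model, this kind of scoring plugs into existing softmax-based baselines without retraining, which is consistent with the small reported inference-time overhead.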