While out-of-distribution (OOD) detection has been well explored in computer vision, there have been relatively few prior attempts at OOD detection for NLP classification. In this paper we argue that these prior attempts do not fully address the OOD problem and may suffer from data leakage and poor calibration of the resulting models. We present PnPOOD, a data augmentation technique that performs OOD detection via out-of-domain sample generation using the recently proposed Plug and Play Language Model (Dathathri et al., 2020). Our method generates high-quality discriminative samples close to the class boundaries, resulting in accurate OOD detection at test time. We demonstrate that our model outperforms prior models on OOD sample detection and exhibits lower calibration error on the 20 Newsgroups text and Stanford Sentiment Treebank datasets (Lang, 1995; Socher et al., 2013). We further highlight an important data leakage issue with datasets used in prior attempts at OOD detection, and share results on a new dataset for OOD detection that does not suffer from the same problem.
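One common way to use generated out-of-domain samples for OOD detection, consistent with the setup described above, is to train a (K+1)-way classifier whose extra class absorbs the augmented samples; at test time, an input is flagged as OOD when that auxiliary class receives the highest probability. The sketch below illustrates only this test-time decision rule; the function names and the toy logits are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

def predict_with_ood(logits, ood_class):
    """Predict class labels and flag inputs as OOD when the
    auxiliary (K+1)-th class wins the argmax.

    logits: array of shape (batch, K+1), where column `ood_class`
            corresponds to the class trained on generated
            out-of-domain samples.
    Returns (labels, is_ood) arrays.
    """
    probs = softmax(logits)
    labels = probs.argmax(axis=-1)
    return labels, labels == ood_class

# Toy example: 3 classes total, index 2 is the auxiliary OOD class.
logits = np.array([
    [2.0, 0.5, 0.1],   # confident in-distribution, class 0
    [0.1, 0.2, 3.0],   # auxiliary class dominates -> flag as OOD
])
labels, is_ood = predict_with_ood(logits, ood_class=2)
# labels -> [0, 2]; is_ood -> [False, True]
```

Training near-boundary OOD samples into a dedicated class (rather than thresholding the maximum softmax probability post hoc) is one design choice that can also improve calibration, since the model learns to place probability mass on the OOD class instead of over-confidently assigning it to an in-distribution label.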