Out-of-distribution (OOD) detection has received much attention lately due to its importance in the safe deployment of neural networks. One of the key challenges is that models lack supervision signals from unknown data, and as a result, can produce overconfident predictions on OOD data. Previous approaches rely on real outlier datasets for model regularization, which can be costly and sometimes infeasible to obtain in practice. In this paper, we present VOS, a novel framework for OOD detection by adaptively synthesizing virtual outliers that can meaningfully regularize the model's decision boundary during training. Specifically, VOS samples virtual outliers from the low-likelihood region of the class-conditional distribution estimated in the feature space. Alongside, we introduce a novel unknown-aware training objective, which contrastively shapes the uncertainty space between the ID data and synthesized outlier data. VOS achieves state-of-the-art performance on both object detection and image classification models, reducing the FPR95 by up to 7.87% compared to the previous best method. Code is available at https://github.com/deeplearning-wisc/vos.
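To make the sampling step described above more concrete, the following is a minimal sketch (not the official implementation from the repository) of how virtual outliers could be drawn from the low-likelihood region of class-conditional Gaussians fitted to penultimate-layer features. The function name `sample_virtual_outliers` and all parameters are hypothetical, and the shared-covariance assumption is one plausible modeling choice.

```python
# Hypothetical sketch of virtual-outlier sampling; not the authors' released code.
import numpy as np

def sample_virtual_outliers(features_per_class, n_candidates=10000, n_keep=100):
    """For each class, fit a Gaussian to ID features and keep the candidate
    samples with the lowest likelihood as virtual outliers."""
    outliers = []
    # Shared covariance across classes, as in a class-conditional Gaussian model.
    all_feats = np.concatenate(list(features_per_class.values()), axis=0)
    cov = np.cov(all_feats, rowvar=False) + 1e-6 * np.eye(all_feats.shape[1])
    inv_cov = np.linalg.inv(cov)
    for cls, feats in features_per_class.items():
        mean = feats.mean(axis=0)
        # Draw candidates from the estimated class-conditional Gaussian.
        candidates = np.random.multivariate_normal(mean, cov, size=n_candidates)
        # Log-likelihood under the same Gaussian (up to an additive constant).
        diff = candidates - mean
        log_lik = -0.5 * np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
        # Keep the lowest-likelihood candidates, i.e. points far from the class mean.
        idx = np.argsort(log_lik)[:n_keep]
        outliers.append(candidates[idx])
    return np.concatenate(outliers, axis=0)
```

In a full training loop, these synthesized features would then enter the unknown-aware objective alongside ID features, so that the model's uncertainty is shaped contrastively between the two sets.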