Out-of-distribution (OOD) detection is an important task for ensuring the reliability and safety of machine learning systems. Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample. However, such models frequently assign suspiciously high likelihoods to certain outliers. Several recent works address this issue by training a neural network with auxiliary outliers generated by perturbing the input data. In this paper, we show that these approaches fail on certain OOD datasets, and we therefore propose a new detection metric that operates without outlier exposure. We observe that, compared with previous outlier-exposing methods, our metric is robust to diverse variations of an image. Furthermore, our proposed score requires neither auxiliary models nor additional training. Instead, we use the likelihood ratio statistic from a new perspective to extract genuine properties from a single deep probabilistic generative model, and we apply a novel numerical approximation to enable fast implementation. Finally, we conduct comprehensive experiments on various probabilistic generative models and show that our method achieves state-of-the-art performance.
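The baseline the abstract builds on, scoring samples by their likelihood under a trained generative model, can be sketched as follows. This is a minimal illustration, not the paper's proposed metric: `log_likelihood` stands in for a deep generative model's density estimate (here a toy 1-D Gaussian), and the threshold value is arbitrary.

```python
import numpy as np

def log_likelihood(x, mu=0.0, sigma=1.0):
    # Stand-in for a trained deep generative model's log-density;
    # here, simply a unit Gaussian fitted to the in-distribution data.
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def is_ood(x, threshold=-5.0):
    # Baseline likelihood test: flag samples whose log-likelihood
    # under the model falls below a chosen threshold as OOD.
    return log_likelihood(x) < threshold

print(is_ood(0.0))   # near the training density -> in-distribution
print(is_ood(10.0))  # far from the training density -> OOD
```

The abstract's point is precisely that this naive test can fail, since deep generative models may assign high likelihoods to outliers, which motivates a likelihood-ratio-based score instead.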