Detecting out-of-distribution (OOD) samples plays a key role in open-world and safety-critical applications such as autonomous systems and healthcare. Recently, self-supervised representation learning techniques (via contrastive learning and pretext learning) have proven effective in improving OOD detection. However, one major issue with such approaches is the choice of shifting transformations and pretext tasks, which depends on the in-domain distribution. In this paper, we propose a simple framework that leverages a shifting transformation learning setting to learn multiple shifted representations of the training set for improved OOD detection. To address the problem of selecting optimal shifting transformations and pretext tasks, we propose a simple mechanism for automatically selecting the transformations and modulating their effect on representation learning, without requiring any OOD training samples. In extensive experiments, we show that our simple framework outperforms state-of-the-art OOD detection models on several image datasets. We also characterize the criteria for a desirable OOD detector in real-world applications and demonstrate the efficacy of the proposed technique against state-of-the-art methods.
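To make the shifted-representation idea concrete, below is a minimal sketch of shift-prediction-based OOD scoring, assuming the four 90° rotations as a fixed transformation set. This is not the paper's implementation: the automatic selection and modulation mechanism described above is not shown, and `shift_batch`, `ShiftClassifier`, and `ood_score` are hypothetical names introduced here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical fixed set of shifting transformations: the four 90-degree
# rotations. The proposed framework selects and weights transformations
# automatically; this fixed set is for illustration only.
NUM_SHIFTS = 4

def shift_batch(x):
    """Apply every rotation to the batch; return the shifted images and
    the index of the rotation applied to each image."""
    shifted = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(NUM_SHIFTS)])
    labels = torch.arange(NUM_SHIFTS).repeat_interleave(x.size(0))
    return shifted, labels

class ShiftClassifier(nn.Module):
    """Tiny encoder plus a head that predicts which shift was applied."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, NUM_SHIFTS)

    def forward(self, x):
        return self.head(self.encoder(x))

def ood_score(model, x):
    """Higher score = more in-distribution: average confidence assigned
    to the true shift across all transformations of each image."""
    model.eval()
    with torch.no_grad():
        shifted, labels = shift_batch(x)
        probs = F.softmax(model(shifted), dim=1)
        conf = probs[torch.arange(len(labels)), labels]
        return conf.view(NUM_SHIFTS, -1).mean(dim=0)

# One training step on in-distribution data only (no OOD samples needed).
model = ShiftClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 3, 32, 32)  # stand-in for a batch of training images
shifted, labels = shift_batch(x)
loss = F.cross_entropy(model(shifted), labels)
opt.zero_grad(); loss.backward(); opt.step()
print(ood_score(model, x))
```

The intuition is that a model trained to recognize which transformation was applied to in-distribution data tends to be less confident on samples from other distributions, so the prediction confidence itself serves as an OOD score.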