Deep neural networks are susceptible to generating overconfident yet erroneous predictions when presented with data beyond known concepts. This challenge underscores the importance of detecting out-of-distribution (OOD) samples in the open world. In this work, we propose a novel feature-space OOD detection score that jointly reasons with both class-specific and class-agnostic information. Specifically, our approach utilizes Whitened Linear Discriminant Analysis to project features into two subspaces - the discriminative and residual subspaces - in which the ID classes are maximally separated and closely clustered, respectively. The OOD score is then determined by combining the deviations of the input data from the ID distribution in both subspaces. The efficacy of our method, named WDiscOOD, is verified on the large-scale ImageNet-1k benchmark, with six OOD datasets that cover a variety of distribution shifts. WDiscOOD demonstrates superior performance on deep classifiers with diverse backbone architectures, including CNN and vision transformer. Furthermore, we also show that our method can more effectively detect novel concepts in representation spaces trained with contrastive objectives, including supervised contrastive loss and multi-modality contrastive loss.
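To make the scoring procedure concrete, the sketch below illustrates one plausible reading of the abstract in NumPy: whiten ID features by the within-class scatter, split the whitened space into a discriminative subspace (leading LDA directions) and its residual complement, and combine the distance to the nearest class centroid in the former with the deviation from the global mean in the latter. This is an illustrative simplification, not the official implementation; the function name `fit_wdiscood` and the hyperparameters `n_disc` and `alpha` are assumptions introduced here.

```python
import numpy as np

def fit_wdiscood(feats, labels, n_disc=500, alpha=1.0):
    """Illustrative WDiscOOD-style scorer (sketch, not the authors' code).

    feats:  (N, D) ID training features from a frozen backbone
    labels: (N,)   ID class labels
    n_disc: assumed number of discriminative directions to keep
    alpha:  assumed weight combining the two subspace deviations
    """
    classes = np.unique(labels)
    mean = feats.mean(axis=0)

    # Within-class scatter and its inverse square root (whitening transform).
    D = feats.shape[1]
    Sw = np.zeros((D, D))
    for c in classes:
        Xc = feats[labels == c]
        Xc = Xc - Xc.mean(axis=0)
        Sw += Xc.T @ Xc
    Sw /= len(feats)
    evals, evecs = np.linalg.eigh(Sw + 1e-6 * np.eye(D))
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T

    # Between-class structure in the whitened space: leading right singular
    # vectors span the discriminative subspace, the rest form the residual one.
    centroids = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    Mb = (centroids - mean) @ W.T
    _, _, Vt = np.linalg.svd(Mb, full_matrices=True)
    P_disc, P_res = Vt[:n_disc], Vt[n_disc:]

    disc_centroids = (centroids @ W.T) @ P_disc.T   # class centroids, discriminative part
    res_mean = (mean @ W.T) @ P_res.T               # global mean, residual part

    def score(x):
        """Combined deviation; larger values suggest the input is more OOD-like."""
        z = np.atleast_2d(x) @ W.T
        zd = z @ P_disc.T
        d_disc = np.linalg.norm(zd[:, None, :] - disc_centroids[None], axis=-1).min(axis=1)
        d_res = np.linalg.norm(z @ P_res.T - res_mean, axis=-1)
        return d_disc + alpha * d_res

    return score
```

In this reading, the discriminative term captures class-specific evidence (distance to the closest ID class) while the residual term captures class-agnostic evidence (deviation from where ID features cluster), and a single weighted sum yields the final OOD score.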