Deep neural networks are susceptible to generating overconfident yet erroneous predictions when presented with data beyond known concepts. This challenge underscores the importance of detecting out-of-distribution (OOD) samples in the open world. In this work, we propose a novel feature-space OOD detection score that jointly reasons with both class-specific and class-agnostic information. Specifically, our approach utilizes Whitened Linear Discriminant Analysis to project features into two subspaces - the discriminative and residual subspaces - in which the ID classes are maximally separated and closely clustered, respectively. The OOD score is then determined by combining the deviations of the input data from the ID distribution in both subspaces. The efficacy of our method, named WDiscOOD, is verified on the large-scale ImageNet-1k benchmark, with six OOD datasets that cover a variety of distribution shifts. WDiscOOD demonstrates superior performance on deep classifiers with diverse backbone architectures, including CNNs and vision transformers. Furthermore, we also show that our method can more effectively detect novel concepts in representation spaces trained with contrastive objectives, including supervised contrastive loss and multi-modality contrastive loss.
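Below is a minimal NumPy sketch of the general recipe the abstract describes (whitened LDA splitting features into a discriminative and a residual subspace, then combining the deviations in both). It is not the authors' released implementation: the helper names (`fit_wdisc`, `wdisc_score`), the symmetric whitening choice, the regularizer `eps`, the number of discriminative directions, and the combination weight `alpha` are all illustrative assumptions.

```python
import numpy as np

def fit_wdisc(train_feats, train_labels, num_disc=None, eps=1e-6):
    """Fit whitened-LDA subspaces from ID training features (illustrative sketch)."""
    classes = np.unique(train_labels)
    d = train_feats.shape[1]
    mean_all = train_feats.mean(axis=0)
    # Pooled within-class scatter and per-class centroids.
    Sw = np.zeros((d, d))
    centroids = {}
    for c in classes:
        Xc = train_feats[train_labels == c]
        centroids[c] = Xc.mean(axis=0)
        Sw += (Xc - centroids[c]).T @ (Xc - centroids[c])
    Sw /= len(train_feats)
    # Symmetric whitening W such that W (Sw + eps*I) W^T = I.
    evals, evecs = np.linalg.eigh(Sw + eps * np.eye(d))
    whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # Between-class scatter in the whitened space.
    diffs = np.stack([(centroids[c] - mean_all) @ whiten.T for c in classes])
    Sb = diffs.T @ diffs / len(classes)
    # Leading directions span the discriminative subspace, trailing ones the residual subspace.
    b_evals, b_evecs = np.linalg.eigh(Sb)
    V = b_evecs[:, np.argsort(b_evals)[::-1]]
    k = num_disc if num_disc is not None else len(classes) - 1
    proj_disc = V[:, :k].T @ whiten   # class-specific projection
    proj_res = V[:, k:].T @ whiten    # class-agnostic projection
    mu_disc = np.stack([proj_disc @ centroids[c] for c in classes])
    mu_res = proj_res @ mean_all
    return proj_disc, proj_res, mu_disc, mu_res

def wdisc_score(feat, proj_disc, proj_res, mu_disc, mu_res, alpha=1.0):
    """Higher score = more ID-like; alpha is an assumed weighting between the two terms."""
    z_disc = proj_disc @ feat
    z_res = proj_res @ feat
    # Deviation to the nearest class centroid in the discriminative subspace.
    d_disc = np.min(np.linalg.norm(mu_disc - z_disc, axis=1))
    # Deviation to the single ID mean in the residual subspace.
    d_res = np.linalg.norm(z_res - mu_res)
    return -(d_disc + alpha * d_res)
```

A test feature would be scored by calling `wdisc_score` with the fitted projections; thresholding the (negated) score then flags OOD samples. The key design point mirrored here is that the residual subspace ignores class identity, so both class-specific and class-agnostic evidence contribute to the final decision.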