Detecting out-of-distribution (OOD) data is a task that is receiving an increasing amount of research attention in the domain of deep learning for computer vision. However, the performance of detection methods is generally evaluated on the task in isolation, rather than also considering potential downstream tasks in tandem. In this work, we examine selective classification in the presence of OOD data (SCOD). That is to say, the motivation for detecting OOD samples is to reject them so that their impact on the quality of predictions is reduced. We show that, under this task specification, existing post-hoc methods perform quite differently compared to when they are evaluated on OOD detection alone. This is because it is no longer an issue to conflate in-distribution (ID) data with OOD data if the ID data is going to be misclassified. However, conflating correct and incorrect predictions within the ID data becomes undesirable. We also propose a novel method for SCOD, Softmax Information Retaining Combination (SIRC), that augments softmax-based confidence scores with feature-agnostic information such that their ability to identify OOD samples is improved without sacrificing separation between correct and incorrect ID predictions. Experiments on a wide variety of ImageNet-scale datasets and convolutional neural network architectures show that SIRC is able to consistently match or outperform the baseline for SCOD, whilst existing OOD detection methods fail to do so.
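To make the SCOD setting concrete, the sketch below illustrates the general idea of combining a softmax-based confidence score with a feature-agnostic secondary score, and then abstaining on low-scoring samples. It is a minimal illustration only: the specific combination function, the choice of the L1 feature norm as the secondary score, and the hyperparameters `a` and `b` are assumptions for demonstration, not the paper's exact SIRC definition.

```python
import numpy as np

def combined_confidence(logits, features, a=50.0, b=0.1):
    """Illustrative SCOD-style confidence score (not the exact SIRC formula).

    Combines a softmax-based score (maximum softmax probability) with a
    softmax-independent secondary score (here, the L1 norm of the
    penultimate features) so that a suspicious secondary score can only
    lower confidence, preserving the ordering given by the softmax score
    when the secondary score is uninformative.
    """
    # Primary score S1: maximum softmax probability (MSP).
    shifted = logits - logits.max(axis=-1, keepdims=True)
    softmax = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    s1 = softmax.max(axis=-1)

    # Secondary, feature-based score S2: L1 norm of the features
    # (an assumed choice of OOD-sensitive signal).
    s2 = np.abs(features).sum(axis=-1)

    # Low S2 amplifies the confidence deficit (1 - S1); high S2 leaves
    # the softmax-based ordering essentially unchanged.
    return -(1.0 - s1) * (1.0 + np.exp(-b * (s2 - a)))

# Usage: abstain on samples whose combined score falls below a threshold
# chosen on in-distribution validation data (here, reject the lowest 20%).
logits = np.random.randn(8, 1000)    # dummy classifier logits
features = np.random.rand(8, 2048)   # dummy penultimate-layer features
scores = combined_confidence(logits, features)
accept = scores > np.quantile(scores, 0.2)
```

Under this setup, both misclassified ID samples and OOD samples should ideally receive low combined scores and be rejected, while confidently correct ID predictions are retained.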