Methods that utilize the outputs or feature representations of predictive models have emerged as promising approaches for out-of-distribution (OOD) detection of image inputs. However, these methods struggle to detect OOD inputs that share nuisance values (e.g., background) with in-distribution inputs. The detection of shared-nuisance out-of-distribution (SN-OOD) inputs is particularly relevant in real-world applications, as anomalies and in-distribution inputs tend to be captured in the same settings during deployment. In this work, we provide a possible explanation for SN-OOD detection failures and propose nuisance-aware OOD detection to address them. Nuisance-aware OOD detection substitutes a classifier trained via empirical risk minimization and cross-entropy loss with one that (1) is trained under a distribution where the nuisance-label relationship is broken and (2) yields representations that are independent of the nuisance under this distribution, both marginally and conditioned on the label. We can train a classifier to achieve these objectives using Nuisance-Randomized Distillation (NuRD), an algorithm developed for OOD generalization under spurious correlations. Output- and feature-based nuisance-aware OOD detection perform substantially better than their original counterparts, succeeding even when detection based on domain generalization algorithms fails to improve performance.
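To make the output- and feature-based detectors concrete, below is a minimal sketch, not the paper's implementation, of two standard score functions applied to the logits and penultimate-layer features of a nuisance-aware classifier (e.g., one trained with NuRD). The function names, the maximum-softmax-probability output score, and the Mahalanobis-style feature score are illustrative assumptions; the paper's exact scores may differ.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def output_score(logits):
    # Output-based score: maximum softmax probability
    # (higher => more likely in-distribution).
    return softmax(logits).max(axis=-1)

def fit_class_gaussians(features, labels):
    # Feature-based detector: fit class-conditional Gaussians with a
    # shared covariance on in-distribution training features.
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate([features[labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    precision = np.linalg.inv(cov)
    return means, precision

def feature_score(features, means, precision):
    # Feature-based score: negative minimum Mahalanobis distance to any
    # class mean (higher => more likely in-distribution).
    dists = []
    for mu in means.values():
        diff = features - mu
        dists.append(np.einsum("nd,de,ne->n", diff, precision, diff))
    return -np.min(np.stack(dists, axis=1), axis=1)
```

In either case, an input is flagged as OOD when its score falls below a threshold chosen on held-out in-distribution data; the abstract's claim is that both scores become substantially more reliable on SN-OOD inputs once the underlying classifier's representations are made nuisance-independent.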