Out-of-distribution (OOD) detection is a crucial aspect of deploying machine learning models in open-world applications. Empirical evidence suggests that training with auxiliary outliers substantially improves OOD detection. However, such outliers typically exhibit a distribution gap from the test OOD data and do not cover all possible test OOD scenarios. Moreover, incorporating these outliers imposes extra training costs. In this paper, we introduce a novel paradigm called test-time OOD detection, which utilizes unlabeled online data directly at test time to improve OOD detection performance. While this paradigm is efficient, it also presents challenges such as catastrophic forgetting. To address these challenges, we propose adaptive outlier optimization (AUTO), which consists of an in-out-aware filter, an ID memory bank, and a semantically-consistent objective. AUTO adaptively mines pseudo-ID and pseudo-OOD samples from test data, utilizing them to optimize networks in real time during inference. Extensive results on CIFAR-10, CIFAR-100, and ImageNet benchmarks demonstrate that AUTO significantly enhances OOD detection performance.
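The core idea of mining pseudo-ID and pseudo-OOD samples from unlabeled test data can be sketched as a confidence-based filter. The sketch below is illustrative, not the paper's implementation: it uses the maximum softmax probability as the score and two hypothetical thresholds (`tau_id`, `tau_ood`) to partition an incoming test batch.

```python
import numpy as np

def filter_pseudo_samples(probs, tau_id=0.95, tau_ood=0.5):
    """Split unlabeled test samples into pseudo-ID and pseudo-OOD sets.

    probs: (N, C) array of softmax outputs for a test batch.
    tau_id / tau_ood: illustrative thresholds, not values from the paper.
    Returns index arrays (pseudo_id, pseudo_ood); samples in between
    are left unused, mimicking an in-out-aware filter.
    """
    msp = probs.max(axis=1)                    # maximum softmax probability
    pseudo_id = np.where(msp >= tau_id)[0]     # confident -> treat as ID
    pseudo_ood = np.where(msp <= tau_ood)[0]   # low-confidence -> treat as OOD
    return pseudo_id, pseudo_ood

# Example: one confident sample, one ambiguous, one maximally uncertain.
batch = np.array([[0.98, 0.02],
                  [0.60, 0.40],
                  [0.50, 0.50]])
id_idx, ood_idx = filter_pseudo_samples(batch)
```

In a full test-time pipeline, the pseudo-OOD samples would drive an outlier-aware loss while the pseudo-ID samples (and an ID memory bank) regularize the update to limit catastrophic forgetting.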