The core of out-of-distribution (OOD) detection is to learn in-distribution (ID) representations that are distinguishable from OOD samples. Previous work applied recognition-based methods to learn ID features, which tend to learn shortcuts instead of comprehensive representations. In this work, we find, surprisingly, that simply using reconstruction-based methods can significantly boost the performance of OOD detection. We explore the main contributors to OOD detection performance and find that reconstruction-based pretext tasks have the potential to provide a generally applicable and effective prior, which helps the model learn the intrinsic data distribution of the ID dataset. Specifically, we adopt Masked Image Modeling as the pretext task for our OOD detection framework (MOOD). Without bells and whistles, MOOD outperforms the previous state of the art (SOTA) on one-class OOD detection by 5.7%, on multi-class OOD detection by 3.0%, and on near-distribution OOD detection by 2.1%. It even defeats 10-shot-per-class outlier-exposure OOD detection, even though we do not include any OOD samples in our method.
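To make the pretext task concrete, below is a minimal sketch of a Masked Image Modeling objective in PyTorch: image patches are randomly masked, replaced with a learned mask token, and the model is trained to reconstruct the pixels of the masked patches only. This is an illustrative sketch, not the paper's actual architecture; all names (`TinyEncoder`, `patchify`) and hyperparameters (mask ratio, dimensions, depth) are assumptions for demonstration.

```python
# Minimal Masked Image Modeling (MIM) pretext-task sketch in PyTorch.
# Hypothetical toy model; real MIM setups use large ViT encoders.
import torch
import torch.nn as nn

def patchify(imgs, patch=16):
    # (B, 3, H, W) -> (B, N, patch*patch*3): flatten non-overlapping patches.
    B, C, H, W = imgs.shape
    p = patch
    x = imgs.reshape(B, C, H // p, p, W // p, p)
    x = x.permute(0, 2, 4, 3, 5, 1).reshape(B, (H // p) * (W // p), p * p * C)
    return x

class TinyEncoder(nn.Module):
    def __init__(self, dim=192, patch=16):
        super().__init__()
        self.embed = nn.Linear(patch * patch * 3, dim)
        self.blocks = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(dim, patch * patch * 3)  # predict raw pixels
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, imgs, mask_ratio=0.6):
        tokens = patchify(imgs)                  # (B, N, p*p*3) targets
        x = self.embed(tokens)                   # (B, N, dim)
        B, N, _ = x.shape
        # Randomly mask a fraction of patches and swap in the mask token.
        mask = torch.rand(B, N, device=x.device) < mask_ratio
        x = torch.where(mask[..., None], self.mask_token.expand(B, N, -1), x)
        x = self.blocks(x)
        pred = self.head(x)
        # MIM loss: reconstruct only the masked patches.
        return ((pred - tokens) ** 2)[mask].mean()

model = TinyEncoder()
imgs = torch.randn(4, 3, 224, 224)  # toy batch of ID images
loss = model(imgs)
loss.backward()
```

In a MOOD-style pipeline, an encoder pretrained this way on the ID dataset would supply the features on which OOD scores are computed; the sketch covers only the reconstruction pretext task itself.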