Out-of-distribution (OOD) detection is essential to prevent anomalous inputs from causing a model to fail during deployment. While improved OOD detection methods have emerged, they often rely on the final layer outputs and require a full feedforward pass for any given input. In this paper, we propose a novel framework, Multi-level Out-Of-Distribution detection (MOOD), which exploits intermediate classifier outputs for dynamic and efficient OOD inference. We explore and establish a direct relationship between the OOD data complexity and optimal exit level, and show that easy OOD examples can be effectively detected early without propagating to deeper layers. At each exit, the OOD examples can be distinguished through our proposed adjusted energy score, which is both empirically and theoretically suitable for networks with multiple classifiers. We extensively evaluate MOOD across 10 OOD datasets spanning a wide range of complexities. Experiments demonstrate that MOOD achieves up to 71.05% computational reduction in inference, while maintaining competitive OOD detection performance.
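The abstract describes the inference procedure only at a high level. As an illustration, the minimal sketch below shows one way such a multi-exit OOD check could be wired up: route an input to an exit based on its complexity, compute an energy score at that exit, and adjust it so scores from different exits are comparable. All names (`mood_inference`, `exits`, `complexity_thresholds`, `energy_means`, `compressed_complexity`) and the specific choices (zlib-compressed length as the complexity proxy, subtracting a per-exit in-distribution mean as the adjustment) are illustrative assumptions, not the paper's exact formulation.

```python
import zlib
import torch


def compressed_complexity(image_bytes: bytes) -> int:
    """Proxy for input complexity: length of a lossless-compressed encoding.
    (Assumption: complexity is approximated by compressed code length.)"""
    return len(zlib.compress(image_bytes))


@torch.no_grad()
def mood_inference(x, image_bytes, exits, complexity_thresholds, energy_means, T=1.0):
    """Sketch of multi-level OOD inference with early exits (hypothetical interface).

    exits: list of callables; exits[k](x) -> logits of the k-th intermediate classifier
    complexity_thresholds: ascending thresholds mapping input complexity to an exit index
    energy_means: per-exit mean score on in-distribution data, used here (as an assumed
                  calibration) to make scores from different exits comparable
    """
    # 1. Route the input to an exit according to its complexity:
    #    simple inputs exit early, complex inputs go deeper.
    c = compressed_complexity(image_bytes)
    k = min(sum(c > t for t in complexity_thresholds), len(exits) - 1)

    # 2. Compute the (negative) energy score at the chosen exit.
    logits = exits[k](x)
    neg_energy = T * torch.logsumexp(logits / T, dim=-1)  # higher => more in-distribution

    # 3. Adjust by the exit-specific in-distribution mean so the detection
    #    threshold can be shared across exits.
    adjusted_score = neg_energy - energy_means[k]
    return k, adjusted_score  # threshold adjusted_score to flag OOD inputs
```

In this sketch, a single threshold on `adjusted_score` decides in- vs. out-of-distribution regardless of which exit fired, which is the point of using an exit-calibrated score rather than the raw energy.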