Numerous Out-of-Distribution (OOD) detection algorithms have been developed to identify unknown samples or objects in real-world model deployments. Outlier Exposure (OE) algorithms, a subset of these methods, typically employ auxiliary datasets to train OOD detectors, enhancing the reliability of their predictions. While previous methods have leveraged Stable Diffusion (SD) to generate pixel-space outliers, such pixel-space samples can complicate network optimization. We propose an Outlier Aware Learning (OAL) framework that instead synthesizes OOD training data directly in the latent space. To regularize the model's decision boundary, we introduce a mutual-information-based contrastive learning approach that amplifies the distinction between In-Distribution (ID) and collected OOD features. The efficacy of this contrastive learning technique is supported by both theoretical analysis and empirical results. Furthermore, we integrate knowledge distillation into our framework to preserve in-distribution classification accuracy. The combined application of contrastive learning and knowledge distillation enables OAL to outperform other OE methods by a considerable margin. Source code is available at: \url{https://github.com/HengGao12/OAL}.
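As a rough illustration of how the three ingredients named above might combine in a single training objective, here is a minimal PyTorch sketch. It is not the authors' implementation: the function name `oal_style_loss`, the loss weights `lambda_con` and `lambda_kd`, the temperature `tau`, and the cosine-similarity term standing in for the paper's mutual-information-based contrastive objective are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): ID cross-entropy combined with a
# contrastive ID-vs-OOD separation term and a knowledge-distillation term.
import torch
import torch.nn.functional as F

def oal_style_loss(logits_id, labels, feat_id, feat_ood,
                   logits_student, logits_teacher,
                   lambda_con=0.5, lambda_kd=1.0, tau=2.0):
    # Standard cross-entropy on in-distribution (ID) samples.
    ce = F.cross_entropy(logits_id, labels)

    # Contrastive-style separation: push normalized ID features away from
    # OOD features synthesized in the latent space. Mean cosine similarity
    # is an assumed stand-in for the mutual-information-based objective.
    z_id = F.normalize(feat_id, dim=1)
    z_ood = F.normalize(feat_ood, dim=1)
    con = (z_id @ z_ood.t()).mean()  # minimizing this increases separation

    # Knowledge distillation from a frozen ID-trained teacher, intended to
    # preserve in-distribution classification accuracy.
    kd = F.kl_div(
        F.log_softmax(logits_student / tau, dim=1),
        F.softmax(logits_teacher / tau, dim=1),
        reduction="batchmean",
    ) * tau * tau

    return ce + lambda_con * con + lambda_kd * kd
```

Under these assumptions, the contrastive term regularizes the decision boundary between ID and synthesized OOD features while the distillation term anchors the classifier to the teacher's ID behavior; the actual objective and weighting are defined in the paper.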