Out-of-distribution (OOD) detection is a crucial task for ensuring the reliability and safety of deep learning systems. Currently, discriminator models outperform other methods in this regard. However, the feature extraction process used by discriminator models suffers from the loss of critical information, leaving them vulnerable to failure cases and malicious attacks. In this paper, we introduce a new perceptron bias assumption, which suggests that discriminator models are more sensitive to certain features of the input, leading to the overconfidence problem. To address this issue, we propose a novel framework that combines discriminator and generative models and integrates diffusion models (DMs) into OOD detection. We demonstrate that the diffusion denoising process (DDP) of DMs serves as a novel form of asymmetric interpolation, which is well-suited to enhance the input and mitigate the overconfidence problem. The discriminator model features of OOD data exhibit sharp changes under DDP, and we utilize the norm of this change as the indicator score. Our experiments on CIFAR10, CIFAR100, and ImageNet show that our method outperforms SOTA approaches. Notably, for the challenging InD ImageNet and OOD Species datasets, our method achieves an AUROC of 85.7, surpassing the previous SOTA method's score of 77.4. Our implementation is available at \url{https://github.com/luping-liu/DiffOOD}.
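To make the scoring idea in the abstract concrete, the following is a minimal sketch of computing the indicator score as the norm of the feature change under the diffusion denoising process. The names `feature_extractor`, `diffusion_denoise`, and the noise level `t` are assumed placeholders for illustration, not the authors' exact interface.

```python
import torch


def ood_score(x, feature_extractor, diffusion_denoise, t=200):
    """Sketch of the feature-change indicator score (assumed interface).

    feature_extractor: discriminator backbone returning a feature vector
    diffusion_denoise: callable applying the diffusion denoising process (DDP)
                       at noise level t and returning the denoised image
    Both callables and the default t are hypothetical placeholders.
    """
    with torch.no_grad():
        f_orig = feature_extractor(x)        # features of the raw input
        x_ddp = diffusion_denoise(x, t)      # asymmetric interpolation via DDP
        f_ddp = feature_extractor(x_ddp)     # features of the enhanced input
        # OOD inputs exhibit a sharp feature change under DDP, so a larger
        # norm indicates a higher likelihood of being out-of-distribution.
        score = torch.linalg.vector_norm(f_orig - f_ddp, dim=-1)
    return score
```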