Out-of-distribution (OOD) detection is an important task for ensuring the reliability and safety of deep learning, and discriminator-based models currently outperform other approaches. However, the feature extraction in discriminator models necessarily compresses the input and discards information, leaving room for failure cases and malicious attacks. In this paper, we propose a new assumption: discriminator models are more sensitive to certain subareas of the input space, and this perceptron bias causes failure cases and overconfidence regions. Under this assumption, we design new detection methods and indicator scores. For the detection method, we introduce diffusion models (DMs) into OOD detection. We find that the diffusion denoising process (DDP) of DMs also acts as a novel form of asymmetric interpolation, which is well suited to enhancing the input and reducing overconfidence regions. For the indicator score, we observe that the discriminator features of OOD inputs change sharply under DDP, and we use the norm of this dynamic change as our indicator score. We therefore develop a new framework that combines discriminator and generative models for OOD detection under our assumption: the discriminator model provides a proper detection space, while the generative model reduces the overconfidence problem. In experiments on CIFAR10 and CIFAR100, our methods achieve results competitive with state-of-the-art methods. Our implementation is available at https://github.com/luping-liu/DiffOOD.
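To make the indicator score concrete, the following is a minimal sketch, not the paper's actual implementation, of scoring an input by how much the discriminator's features move after a diffuse-then-denoise round trip. The names `feature_extractor`, `diffuse`, `denoise`, and the step `t` are hypothetical placeholders for the trained discriminator backbone and the diffusion model's forward and reverse processes.

```python
# Hedged sketch: norm of the discriminator-feature change under a diffusion
# denoising round trip, used as an OOD indicator score. All callables below
# are assumed stand-ins, not the repository's real API.
import torch

def ood_score(x, feature_extractor, diffuse, denoise, t=200):
    """Return a per-sample OOD indicator score for a batch of images x."""
    with torch.no_grad():
        feat_before = feature_extractor(x)           # features of the raw input
        x_noisy = diffuse(x, t)                      # forward diffusion to step t
        x_denoised = denoise(x_noisy, t)             # reverse denoising back toward t = 0
        feat_after = feature_extractor(x_denoised)   # features after the DDP round trip
        # OOD inputs tend to show a sharper feature change under this process,
        # so a larger norm suggests a more likely out-of-distribution sample.
        return (feat_after - feat_before).flatten(1).norm(dim=1)
```

Under this reading, thresholding the returned score would separate in-distribution from OOD samples, with the diffusion round trip acting as the asymmetric interpolation described above.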