Out-of-distribution (OOD) detection aims to equip standard deep neural networks with the ability to distinguish anomalous inputs from the original training data. Prior work has introduced various approaches that require access to the in-distribution training data, and sometimes even to several OOD examples. However, due to privacy and security concerns, such auxiliary data is often unavailable in real-world scenarios. In this paper, we propose a data-free method that requires no training on natural data, called Class-Conditional Impressions Reappearing (C2IR), which uses image impressions from the fixed model to recover class-conditional feature statistics. Building on these statistics, we introduce Integral Probability Metrics to estimate layer-wise class-conditional deviations and obtain layer weights by Measuring Gradient-based Importance (MGI). Experiments verify the effectiveness of our method and show that C2IR outperforms other post-hoc methods and achieves performance comparable to the full-access (ID and OOD) detection method, especially on the far-OOD dataset (SVHN).
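The scoring idea described above can be illustrated with a minimal sketch: per-layer class-conditional deviations are computed against recovered feature statistics, then combined with layer weights. The function names (`layer_deviation`, `ood_score`), the normalized-distance form of the deviation, and the weighted sum are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def layer_deviation(feat, class_means, class_stds):
    # IPM-style deviation at one layer: normalized distance from a test
    # feature to the nearest class-conditional statistics (hypothetical
    # form; the paper's exact metric may differ).
    d = np.linalg.norm((feat - class_means) / class_stds, axis=1)
    return d.min()

def ood_score(layer_feats, stats, layer_weights):
    # Weighted sum of per-layer deviations; a larger score suggests the
    # input is more OOD-like. Layer weights would come from MGI.
    return sum(w * layer_deviation(f, m, s)
               for f, (m, s), w in zip(layer_feats, stats, layer_weights))
```

In this sketch, `stats` holds one `(class_means, class_stds)` pair per layer, as would be recovered from class-conditional impressions; thresholding `ood_score` then separates ID from OOD inputs.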