Deep neural networks (DNNs) achieve outstanding performance in a wide range of applications. Despite numerous efforts by the research community, out-of-distribution (OOD) samples remain a significant limitation of DNN classifiers. The ability to identify previously unseen inputs as novel is crucial in safety-critical applications such as self-driving cars, unmanned aerial vehicles, and robots. Existing approaches to OOD sample detection treat the DNN as a black box and evaluate the confidence scores of its output predictions. Unfortunately, these approaches frequently fail, because DNNs are not trained to reduce their confidence on OOD inputs. In this work, we introduce a novel method for OOD detection. It is motivated by a theoretical analysis of neuron activation patterns (NAPs) in ReLU-based architectures. Because the activation patterns extracted from convolutional layers are represented in binary form, the proposed method introduces little computational overhead. An extensive empirical evaluation demonstrates its strong performance across various DNN architectures and seven image datasets.
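To make the core idea concrete, the sketch below illustrates what a binary neuron activation pattern (NAP) is for a toy fully-connected ReLU network. This is an illustrative assumption of the general technique, not the paper's exact procedure: the function names (`activation_pattern`, `hamming_distance`) and the min-Hamming-distance OOD score are hypothetical, and the paper itself extracts patterns from convolutional layers of trained DNNs.

```python
import numpy as np

def relu(x):
    """Elementwise ReLU nonlinearity."""
    return np.maximum(x, 0.0)

def activation_pattern(x, weights, biases):
    """Forward a sample through ReLU layers, recording which neurons fire.

    Returns a binary vector over all hidden neurons: 1 where the
    pre-activation is positive (the ReLU is "on"), 0 otherwise.
    This binary code is the neuron activation pattern (NAP).
    """
    pattern = []
    h = x
    for W, b in zip(weights, biases):
        z = h @ W + b
        pattern.append((z > 0).astype(np.uint8))
        h = relu(z)
    return np.concatenate(pattern)

def hamming_distance(p, q):
    """Number of neurons whose on/off state differs between two patterns."""
    return int(np.sum(p != q))

def ood_score(x, train_patterns, weights, biases):
    """Hypothetical OOD score: distance to the nearest in-distribution pattern.

    A test input whose pattern is far (in Hamming distance) from every
    pattern seen on training data is flagged as likely out-of-distribution.
    """
    p = activation_pattern(x, weights, biases)
    return min(hamming_distance(p, q) for q in train_patterns)
```

Because the pattern is a bit vector, storing and comparing patterns is cheap, which is why a NAP-based detector adds little computational overhead on top of the classifier's ordinary forward pass.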