Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution (OOD) inputs. This limitation is one of the key challenges to the adoption of deep learning models in high-assurance systems such as autonomous driving, air traffic management, and medical diagnosis. The problem has received significant attention recently, and several techniques have been developed to detect inputs on which a model's prediction cannot be trusted. These techniques rely on different statistical, geometric, or topological signatures. This paper presents a taxonomy of OOD outlier inputs based on the source and nature of their uncertainty. We demonstrate how different existing detection approaches fail to detect certain types of outliers, and we use these insights to develop a novel integrated detection approach that combines multiple attributes, each corresponding to a different type of outlier. Our results include experiments with CIFAR10, SVHN, and MNIST as in-distribution data and ImageNet, LSUN, SVHN (for CIFAR10), CIFAR10 (for SVHN), KMNIST, and F-MNIST as OOD data, across DNN architectures such as ResNet34, WideResNet, DenseNet, and LeNet5.