Deep neural networks (DNNs), especially convolutional neural networks, have achieved superior performance on image classification tasks. However, such performance is guaranteed only if the input to a trained model is similar to the training samples, i.e., the input follows the probability distribution of the training set. Out-of-distribution (OOD) samples do not follow the distribution of the training set, and therefore the predicted class labels on OOD samples are meaningless. Classification-based methods have been proposed for OOD detection; however, in this study we show that this type of method is theoretically ineffective and practically breakable because of the dimensionality reduction in the model. We also show that Glow likelihood-based OOD detection is ineffective as well. Our analysis is demonstrated on five open datasets, including a COVID-19 CT dataset. Finally, we present a simple theoretical solution with guaranteed performance for OOD detection.