Neural networks achieve impressive performance on data drawn from the same distribution as the training set, but can produce overconfident, incorrect predictions on data they have never seen. It is therefore essential to detect whether inputs are out-of-distribution (OOD) in order to guarantee the safety of neural networks deployed in the real world. In this paper, we propose a simple and effective post-hoc technique, WeShort, to reduce the overconfidence of neural networks on OOD data. Our method is inspired by an observation about the internal residual structure of networks: OOD and in-distribution (ID) data separate in the shortcut layers. Our method is compatible with different OOD detection scores and generalizes well across network architectures. We evaluate WeShort on various OOD datasets, demonstrate its competitive performance, and offer reasonable hypotheses for why it works. On the ImageNet benchmark, WeShort achieves state-of-the-art false positive rate (FPR95) and area under the receiver operating characteristic (AUROC) among the family of post-hoc methods.
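The abstract states that WeShort is compatible with different post-hoc OOD detection scores and is evaluated with FPR95. As a minimal sketch, the snippet below implements one such generic score, the energy score (a log-sum-exp over the logits), together with the FPR95 evaluation rule: threshold the score at 95% true positive rate on ID data and measure the fraction of OOD samples still flagged as ID. This is not the WeShort method itself, whose shortcut-layer rectification is only described qualitatively in the abstract; the function names are illustrative.

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """Generic post-hoc OOD score: T * logsumexp(logits / T).
    Higher values indicate the input is more likely in-distribution."""
    z = np.asarray(logits, dtype=np.float64) / temperature
    m = z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    return temperature * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR95: pick the threshold that keeps 95% of ID samples (TPR = 0.95),
    then report the fraction of OOD samples scoring above that threshold."""
    threshold = np.percentile(id_scores, 5)  # 95% of ID scores lie above it
    return float(np.mean(np.asarray(ood_scores) >= threshold))

# A confidently classified input yields a higher energy score
# than a flat (uncertain) logit vector.
confident = energy_score([10.0, 0.0, 0.0])
flat = energy_score([0.0, 0.0, 0.0])
```

In practice, a post-hoc method like WeShort modifies the features or logits before such a score is computed, which is why it composes with any score of this form.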