Deep neural networks, despite their remarkable capability of discriminating in-distribution samples, perform poorly at detecting anomalous out-of-distribution data. To address this defect, state-of-the-art solutions train deep networks on an auxiliary dataset of outliers. Various training criteria for these auxiliary outliers have been proposed based on heuristic intuitions. However, we find that these intuitively designed outlier training criteria can hurt in-distribution learning and eventually lead to inferior performance. We identify three causes of this in-distribution incompatibility: contradictory gradients, false likelihood, and distribution shift. Based on these new understandings, we propose a new out-of-distribution detection method that adapts both the top design of deep models and the loss function. Our method achieves in-distribution compatibility by interfering less with the probabilistic characteristics of in-distribution features. On several benchmarks, our method not only achieves state-of-the-art out-of-distribution detection performance but also improves in-distribution accuracy.
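As context for the training criteria the abstract critiques, below is a minimal sketch of the common outlier-exposure style objective: standard cross-entropy on in-distribution samples plus a term that pushes auxiliary-outlier predictions toward the uniform distribution. This is an illustration of the general approach, not the paper's proposed method; identifiers such as oe_loss and lam are hypothetical.

import torch
import torch.nn.functional as F

def oe_loss(logits_in, labels_in, logits_out, lam=0.5):
    # Standard cross-entropy on in-distribution samples.
    ce = F.cross_entropy(logits_in, labels_in)
    # Cross-entropy of outlier predictions against the uniform
    # distribution: -(1/K) * sum_k log p_k = -mean(log_softmax).
    uniform_ce = -F.log_softmax(logits_out, dim=1).mean(dim=1).mean()
    # lam (hypothetical weight) trades off the two terms.
    return ce + lam * uniform_ce

The paper's observation is that gradients from the outlier term of such objectives can conflict with the in-distribution classification gradients, which motivates its in-distribution-compatible alternative.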