We consider the problem of detecting out-of-distribution (OoD) inputs to deep neural networks, and we propose a simple yet effective way to improve the robustness of several popular OoD detection methods against label shift. Our work is motivated by the observation that most existing OoD detection algorithms treat all training/test data as a whole, regardless of which class entry each input activates, thereby ignoring inter-class differences. Through extensive experimentation, we find that this practice yields a detector whose performance is sensitive and vulnerable to label shift. To address this issue, we propose a class-wise thresholding scheme that can be applied to most existing OoD detection algorithms and that maintains OoD detection performance even in the presence of label shift in the test distribution.
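To make the class-wise thresholding idea concrete, below is a minimal Python sketch. It assumes the detector produces a scalar confidence score per input (e.g., maximum softmax probability) and calibrates one threshold per predicted class to a fixed true-positive rate on held-out in-distribution data; the function names, the `tpr` parameter, and the quantile-based calibration are illustrative assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np

def fit_classwise_thresholds(val_scores, val_preds, num_classes, tpr=0.95):
    """Calibrate one threshold per class so that ~tpr of in-distribution
    validation inputs predicted as that class are kept (scored as in-distribution).

    val_scores: (N,) confidence scores on in-distribution validation data
    val_preds:  (N,) predicted class indices for the same inputs
    """
    thresholds = np.empty(num_classes)
    for c in range(num_classes):
        scores_c = val_scores[val_preds == c]
        # Threshold at the (1 - tpr) quantile: tpr of in-distribution scores
        # for this class lie above it. Classes with no validation samples
        # would need a fallback (e.g., the global quantile), omitted here.
        thresholds[c] = np.quantile(scores_c, 1.0 - tpr)
    return thresholds

def is_ood(score, pred, thresholds):
    """Flag an input as OoD when its score falls below the threshold
    of the class it activates, rather than a single global threshold."""
    return score < thresholds[pred]
```

The design point is that each class's operating point is set independently: under label shift, the test-time class proportions change, which moves the score distribution of the pooled test set and thus breaks a single global threshold, while per-class thresholds remain calibrated.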