Detecting out-of-distribution (OOD) inputs has been a critical issue for neural networks deployed in the open world. However, the unstable behavior of OOD detection along the optimization trajectory during training has not been clearly explored. In this paper, we first find that OOD detection suffers from overfitting and instability during training: 1) performance can degrade even as the training error approaches zero, and 2) performance fluctuates sharply in the final stage of training. Based on these findings, we propose Average of Pruning (AoP), consisting of model averaging and pruning, to mitigate the unstable behaviors. Specifically, model averaging helps achieve stable performance by smoothing the loss landscape, and pruning mitigates overfitting by eliminating redundant features. Comprehensive experiments on various datasets and architectures verify the effectiveness of our method.
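A minimal sketch of the two AoP components, assuming a standard PyTorch training loop. The exact averaging schedule and pruning criterion are not specified in the abstract, so this hypothetical sketch uses a uniform running weight average (as in stochastic weight averaging) and global L1 magnitude pruning; the helper names and the pruning ratio are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of AoP-style model averaging + pruning (not the paper's code).
import copy
import torch
import torch.nn.utils.prune as prune

def update_weight_average(avg_model, model, n_averaged):
    """Update a uniform running average of model weights in place."""
    for p_avg, p in zip(avg_model.parameters(), model.parameters()):
        # new_avg = (n * old_avg + current) / (n + 1)
        p_avg.data.mul_(n_averaged / (n_averaged + 1.0)).add_(p.data / (n_averaged + 1.0))
    return n_averaged + 1

def magnitude_prune(model, amount=0.3):
    """Globally zero out the smallest-magnitude weights of linear/conv layers
    (assumed pruning criterion; ratio `amount` is a placeholder)."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=amount)
    for m, name in params:
        prune.remove(m, name)  # bake the pruning mask into the weights

# Usage sketch: keep an averaged copy alongside the trained model,
# update it periodically, and prune the averaged model for OOD scoring.
# avg_model, n = copy.deepcopy(model), 0
# each epoch: n = update_weight_average(avg_model, model, n)
# at the end: magnitude_prune(avg_model, amount=0.3)
```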