Many high-performing approaches to out-of-distribution (OOD) detection use real or synthetically generated outlier data to regularise model confidence; however, they often require retraining the base network or specialised model architectures. Our work demonstrates that Noisy Inliers Make Great Outliers (NIMGO) in the challenging setting of OOD object detection. We hypothesise that synthetic outliers need only be minimally perturbed variants of the in-distribution (ID) data to train a discriminator to identify OOD samples -- without expensive retraining of the base network. To test our hypothesis, we generate a synthetic outlier set by applying an additive-noise perturbation to ID samples at the image or bounding-box level. An auxiliary feature-monitoring multilayer perceptron (MLP) is then trained to detect OOD feature representations, using the perturbed ID samples as a proxy for true outliers. At test time, the auxiliary MLP distinguishes ID from OOD samples at a state-of-the-art level, reducing the false positive rate by more than 20\% (absolute) over the previous state-of-the-art on the OpenImages dataset. Extensive additional ablations provide empirical evidence in support of our hypothesis.
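The core idea above can be sketched in a few lines: perturb ID samples with additive Gaussian noise to obtain proxy outliers, then train a small MLP to separate the two. The sketch below is a toy illustration only, assuming random vectors as stand-ins for detector feature representations and a hypothetical noise scale `sigma`; it is not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noisy_outliers(x, sigma):
    # Additive-noise perturbation of ID samples (the noisy-inlier proxy idea);
    # sigma is a hypothetical hyperparameter for this toy example.
    return x + rng.normal(0.0, sigma, size=x.shape)

# Toy stand-in for ID feature vectors; the real method monitors detector features.
d = 8
id_feats = rng.normal(0.0, 1.0, size=(1000, d))
proxy_ood = make_noisy_outliers(id_feats, sigma=3.0)

X = np.vstack([id_feats, proxy_ood])
y = np.concatenate([np.zeros(1000), np.ones(1000)])  # 0 = ID, 1 = proxy OOD

# One-hidden-layer MLP discriminator trained with full-batch gradient descent.
h = 32
W1 = rng.normal(0.0, 0.5, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.5, h);      b2 = 0.0
lr = 0.05
for _ in range(2000):
    z = np.maximum(X @ W1 + b1, 0.0)            # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(z @ W2 + b2)))    # sigmoid OOD score
    g = (p - y) / len(y)                        # grad of BCE w.r.t. logits
    gW2 = z.T @ g; gb2 = g.sum()
    gz = np.outer(g, W2) * (z > 0)              # backprop through ReLU
    gW1 = X.T @ gz; gb1 = gz.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Training accuracy of the discriminator on ID vs. proxy-OOD samples.
z = np.maximum(X @ W1 + b1, 0.0)
p = 1.0 / (1.0 + np.exp(-(z @ W2 + b2)))
acc = float(((p > 0.5) == y).mean())
```

Because the noisy proxies differ from the inliers mainly in feature magnitude, even this tiny MLP learns a usable ID/OOD boundary; in the full method the same role is played by an auxiliary MLP on the frozen detector's features, so the base network never needs retraining.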