Out-of-distribution (OOD) detection is a common issue when deploying vision models in practice, and solving it is an essential building block of safety-critical applications. Existing OOD detection solutions focus on improving the OOD robustness of a classification model trained exclusively on in-distribution (ID) data. In this work, we take a different approach and propose to leverage generic pre-trained representations. We first investigate the behaviour of simple classifiers built on top of such representations and show striking performance gains compared to ID-trained representations. We propose a novel OOD method, called GROOD, that achieves excellent performance, predicated on the use of a good generic representation. Only a trivial training process is required to adapt GROOD to a particular problem. The method is simple, general, efficient, calibrated, and has only a few hyper-parameters. It achieves state-of-the-art performance on a number of OOD benchmarks, reaching near-perfect performance on several of them. The source code is available at https://github.com/vojirt/GROOD.
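To make the idea of "simple classifiers built on top of generic representations" concrete, here is a minimal sketch of one such classifier: a nearest-class-mean scorer over frozen pre-trained features. This is an illustrative assumption, not the actual GROOD algorithm; the function names and the choice of Euclidean distance are ours.

```python
import numpy as np

def fit_class_means(features, labels):
    """Compute one prototype (mean feature vector) per ID class
    from frozen, pre-extracted features."""
    classes = np.unique(labels)
    return np.stack([features[labels == c].mean(axis=0) for c in classes])

def ood_score(features, means):
    """Distance to the nearest class prototype; a higher score
    suggests the sample is more likely out-of-distribution."""
    dists = np.linalg.norm(features[:, None, :] - means[None, :, :], axis=-1)
    return dists.min(axis=1)
```

A threshold on this score then separates ID from OOD samples; the premise of the abstract is that such simple scorers work far better on a good generic representation than on one trained only on ID data.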
Calibrated Out-of-Distribution Detection with a Generic Representation