Out-of-distribution (OOD) detection has attracted considerable attention from the machine learning research community in recent years due to its importance in deployed systems. Most previous studies have focused on detecting OOD samples in the multi-class classification task; OOD detection in the multi-label classification task, however, remains an underexplored domain. In this research, we propose YolOOD - a method that utilizes concepts from the object detection domain to perform OOD detection in the multi-label classification task. Object detection models have an inherent ability to distinguish between objects of interest (in-distribution) and irrelevant objects (e.g., OOD objects) in images that contain multiple objects from different categories. This ability allows us to convert a regular object detection model into an image classifier with inherent OOD detection capabilities by making only minor changes. We compare our approach to state-of-the-art OOD detection methods and demonstrate YolOOD's ability to outperform them on a comprehensive suite of in-distribution and OOD benchmark datasets.
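To make the idea concrete, the following is a minimal sketch, not the paper's actual implementation, of how per-cell predictions from a YOLO-style detection head might be aggregated into image-level multi-label scores and a single OOD confidence. The function name, tensor layout, and the objectness-weighted max aggregation are illustrative assumptions.

```python
import torch


def detection_head_to_multilabel_scores(head_outputs, num_classes):
    """Hypothetical aggregation of YOLO-style head outputs for OOD detection.

    head_outputs: list of tensors, one per detection scale, each of shape
        (batch, cells, 1 + num_classes), holding an objectness logit followed
        by per-class logits. These shapes are assumptions for illustration.

    Returns image-level per-class scores and a scalar in-distribution
    confidence per image (higher means more likely in-distribution).
    """
    per_scale_scores = []
    for out in head_outputs:
        obj = torch.sigmoid(out[..., :1])                  # objectness per cell
        cls = torch.sigmoid(out[..., 1:1 + num_classes])   # class probabilities per cell
        # Weight class probabilities by objectness, then keep the best cell per class.
        per_scale_scores.append((obj * cls).amax(dim=1))   # (batch, num_classes)
    # Combine detection scales by taking the maximum over them.
    image_scores = torch.stack(per_scale_scores, dim=0).amax(dim=0)
    # Multi-label prediction: threshold image_scores per class.
    # OOD score: the highest class confidence in the image.
    ood_confidence = image_scores.amax(dim=1)               # (batch,)
    return image_scores, ood_confidence
```

In this sketch, an image would be flagged as OOD when its `ood_confidence` falls below a threshold chosen on an in-distribution validation set; the per-class `image_scores` double as the multi-label classification output.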