Recent work on observer networks has shown promising results on Out-Of-Distribution (OOD) detection for semantic segmentation. However, these methods have difficulty precisely locating the point of interest in the image, i.e., the anomaly. This limitation stems from the difficulty of fine-grained prediction at the pixel level. To address this issue, we provide instance knowledge to the observer. We extend the ObsNet approach by harnessing instance-wise mask predictions. We use an additional, class-agnostic object detector to filter and aggregate observer predictions. Finally, we predict a unique anomaly score for each instance in the image. We show that our proposed method accurately disentangles in-distribution objects from Out-Of-Distribution objects on three datasets.
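To illustrate the kind of instance-wise aggregation the abstract describes, here is a minimal sketch (not the authors' implementation): it assumes a pixel-level anomaly map produced by an observer network and a set of binary instance masks from a class-agnostic detector, filters small proposals, and pools the pixel scores into one anomaly score per instance. The function name, the mean-pooling choice, and the `min_area` threshold are illustrative assumptions.

```python
import numpy as np

def instance_anomaly_scores(pixel_anomaly, instance_masks, min_area=50):
    """Aggregate a pixel-level anomaly map into one score per instance.

    pixel_anomaly:  (H, W) array of per-pixel OOD scores (e.g. observer output).
    instance_masks: iterable of (H, W) boolean masks from a class-agnostic detector.
    min_area:       hypothetical threshold to filter tiny/noisy proposals.
    """
    scores = []
    for mask in instance_masks:
        if mask.sum() < min_area:
            continue  # discard very small instance proposals
        # mean-pool the observer's pixel scores inside the instance mask
        scores.append(float(pixel_anomaly[mask].mean()))
    return scores
```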