Recent works on predictive uncertainty estimation have shown promising results for Out-Of-Distribution (OOD) detection in semantic segmentation. However, these methods struggle to precisely locate the point of interest in the image, i.e., the anomaly. This limitation stems from the difficulty of fine-grained prediction at the pixel level. To address this issue, we build upon the recent ObsNet approach by providing object instance knowledge to the observer. We extend ObsNet by harnessing instance-wise mask predictions. We use an additional, class-agnostic object detector to filter and aggregate the observer's predictions. Finally, we predict a unique anomaly score for each instance in the image. We show that our proposed method accurately disentangles in-distribution objects from OOD objects on three datasets.
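As a rough illustration of the instance-level aggregation described above, the sketch below computes one anomaly score per class-agnostic instance mask from a pixel-wise observer error map. The masked-mean aggregation rule, the `min_pixels` filtering threshold, and the function name are assumptions for illustration only; the abstract does not specify these details.

```python
import numpy as np

def instance_anomaly_scores(obs_error_map, instance_masks, min_pixels=50):
    """Aggregate a pixel-wise observer error map (H, W) into one anomaly
    score per class-agnostic instance mask, skipping very small masks.
    Note: masked mean and min_pixels are illustrative assumptions."""
    scores = []
    for mask in instance_masks:          # each mask: boolean array of shape (H, W)
        if mask.sum() < min_pixels:      # filter out tiny / spurious detections
            continue
        scores.append(float(obs_error_map[mask].mean()))
    return scores

# Toy usage: obs_error_map stands in for the observer's pixel-wise output,
# instance_masks for masks produced by the class-agnostic object detector.
H, W = 64, 64
obs_error_map = np.random.rand(H, W)
instance_masks = [np.zeros((H, W), dtype=bool) for _ in range(2)]
instance_masks[0][10:30, 10:30] = True
instance_masks[1][40:60, 5:25] = True
print(instance_anomaly_scores(obs_error_map, instance_masks))
```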