The inability of state-of-the-art semantic segmentation methods to detect anomaly instances hinders them from being deployed in safety-critical and complex applications, such as autonomous driving. Recent approaches have focused on either leveraging segmentation uncertainty to identify anomalous areas or re-synthesizing the image from the semantic label map to find dissimilarities with the input image. In this work, we demonstrate that these two methodologies contain complementary information and can be combined to produce robust predictions for anomaly segmentation. We present a pixel-wise anomaly detection framework that uses uncertainty maps to improve over existing re-synthesis methods in finding dissimilarities between the input and generated images. Our approach works as a general framework around already trained segmentation networks, which ensures anomaly detection without compromising segmentation accuracy, while significantly outperforming all similar methods. Top-2 performance across a range of anomaly datasets demonstrates the robustness of our approach in handling different anomaly instances.
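The fusion idea described above can be illustrated with a toy per-pixel sketch. The entropy-based uncertainty, the L1 re-synthesis dissimilarity, and the linear mixing weight `alpha` below are illustrative assumptions for exposition only, not the framework's actual learned fusion module:

```python
import math

def softmax_entropy(logits):
    # Per-pixel entropy of the softmax distribution, a common
    # proxy for segmentation uncertainty (illustrative choice).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    probs = [e / s for e in exps]
    return -sum(p * math.log(p) for p in probs if p > 0)

def anomaly_score(logits, input_px, resynth_px, alpha=0.5):
    # Fuse uncertainty with the dissimilarity between the input
    # pixel and the pixel re-synthesized from the label map.
    # alpha is a hypothetical mixing weight, not from the paper.
    uncertainty = softmax_entropy(logits)
    dissimilarity = sum(abs(a - b) for a, b in zip(input_px, resynth_px)) / 3.0
    return alpha * uncertainty + (1 - alpha) * dissimilarity
```

A confidently segmented pixel whose re-synthesis matches the input yields a low score, while an uncertain pixel with a large re-synthesis mismatch yields a high one, flagging it as a likely anomaly.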