State-of-the-art semantic or instance segmentation deep neural networks (DNNs) are usually trained on a closed set of semantic classes. As such, they are ill-equipped to handle previously-unseen objects. However, detecting and localizing such objects is crucial for safety-critical applications such as perception for automated driving, especially if they appear on the road ahead. While some methods have tackled the tasks of anomalous or out-of-distribution object segmentation, progress remains slow, in large part due to the lack of solid benchmarks; existing datasets either consist of synthetic data or suffer from label inconsistencies. In this paper, we bridge this gap by introducing the "SegmentMeIfYouCan" benchmark. Our benchmark addresses two tasks: anomalous object segmentation, which considers any previously-unseen object category, and road obstacle segmentation, which focuses on any object on the road, be it known or unknown. We provide two corresponding datasets together with a test suite performing an in-depth method analysis, considering both established pixel-wise performance metrics and recent component-wise ones, which are insensitive to object sizes. We empirically evaluate multiple state-of-the-art baseline methods, including several specifically designed for anomaly / obstacle segmentation, on our datasets as well as on public ones, using our benchmark suite. The anomaly and obstacle segmentation results show that our datasets contribute to the diversity and difficulty of both dataset landscapes.