Level 5 autonomy for self-driving cars requires a robust visual perception system that can parse input images under any visual condition. However, existing semantic segmentation datasets are either dominated by images captured under normal conditions or are small in scale. To address this, we introduce ACDC, the Adverse Conditions Dataset with Correspondences, for training and testing semantic segmentation methods on adverse visual conditions. ACDC consists of a large set of 4006 images evenly distributed among four common adverse conditions: fog, nighttime, rain, and snow. Each adverse-condition image comes with a high-quality fine pixel-level semantic annotation, a corresponding image of the same scene taken under normal conditions, and a binary mask that distinguishes between intra-image regions of clear and uncertain semantic content. Thus, ACDC supports both standard semantic segmentation and the newly introduced uncertainty-aware semantic segmentation. A detailed empirical study demonstrates the challenges that the adverse domains of ACDC pose to state-of-the-art supervised and unsupervised approaches and indicates the value of our dataset in steering future progress in the field. Our dataset and benchmark are publicly available.
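To make the per-sample structure described above concrete, the following is a minimal sketch of how one ACDC record (adverse-condition image, normal-condition correspondence, fine semantic annotation, and binary uncertainty mask) could be represented and loaded. The class, directory layout, and file suffixes are illustrative assumptions for this sketch, not ACDC's actual naming scheme; consult the released dataset for its real format.

```python
# Illustrative sketch only: field names, layout, and suffixes are assumptions.
from dataclasses import dataclass
from pathlib import Path

import numpy as np
from PIL import Image


@dataclass
class ACDCSample:
    adverse_image: np.ndarray    # scene under fog, nighttime, rain, or snow
    normal_image: np.ndarray     # corresponding normal-condition image of the same scene
    semantic_labels: np.ndarray  # H x W per-pixel class IDs (fine annotation)
    invalid_mask: np.ndarray     # H x W boolean mask, True where semantics are uncertain


def load_sample(root: Path, condition: str, stem: str) -> ACDCSample:
    """Load one sample from a hypothetical <root>/<condition>/ layout."""
    def read(name: str) -> np.ndarray:
        return np.array(Image.open(root / condition / name))

    return ACDCSample(
        adverse_image=read(f"{stem}_adverse.png"),
        normal_image=read(f"{stem}_reference.png"),
        semantic_labels=read(f"{stem}_labels.png"),
        invalid_mask=read(f"{stem}_invalid.png").astype(bool),
    )
```

Under these assumptions, the invalid mask is what enables the uncertainty-aware variant of the task: a method can be evaluated differently on pixels whose semantic content is clear versus those the annotators marked as uncertain.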