Within the context of autonomous driving, safety-related metrics for deep neural networks have been widely studied for image classification and object detection. In this paper, we further consider safety-aware correctness and robustness metrics specialized for semantic segmentation. The novelty of our proposal is to move beyond pixel-level metrics: given two images that each have N class-flipped pixels, the designed metrics should reflect different levels of safety criticality depending on how the class-flipped pixels cluster and where they occur. Results evaluated on an autonomous driving dataset demonstrate the validity and practicality of our proposed methodology.
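As a rough illustration of this idea only, and not the metric definitions proposed in the paper, the sketch below scores class-flipped pixels by the size of the connected cluster they belong to and, optionally, by a location-dependent weight map, so that two segmentation outputs with the same number N of flipped pixels can receive very different scores. The function name cluster_aware_flip_score, the quadratic cluster-size penalty, and the weight_map argument are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def cluster_aware_flip_score(pred, pred_perturbed, weight_map=None):
    """Hypothetical example metric: pixels whose predicted class flips between
    two segmentation outputs are penalized more when they form large connected
    clusters, and (optionally) more in safety-critical image regions."""
    flipped = (pred != pred_perturbed)              # boolean mask of class-flipped pixels
    labels, num_clusters = ndimage.label(flipped)   # 4-connected clusters of flipped pixels
    score = 0.0
    for k in range(1, num_clusters + 1):
        cluster = (labels == k)
        size = int(cluster.sum())
        # Quadratic penalty: one 9-pixel blob counts more than 9 scattered pixels.
        penalty = float(size) ** 2
        if weight_map is not None:
            # Scale by the mean location weight, e.g. higher near the ego lane.
            penalty *= float(weight_map[cluster].mean())
        score += penalty
    return score

# Two perturbed outputs with the same number (N = 9) of class-flipped pixels:
base = np.zeros((8, 8), dtype=int)
scattered = base.copy(); scattered[::3, ::3] = 1    # 9 isolated flips
clustered = base.copy(); clustered[2:5, 2:5] = 1    # one 3x3 blob of 9 flips
print(cluster_aware_flip_score(base, scattered))    # 9 clusters of size 1 -> 9.0
print(cluster_aware_flip_score(base, clustered))    # 1 cluster of size 9 -> 81.0
```

A plain pixel-level error rate would assign both perturbations the same value (9/64), whereas a clustering- and location-aware score distinguishes them, which is the behavior the proposed metrics are designed to capture.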