Safety-critical applications such as autonomous driving use Deep Neural Networks (DNNs) for object detection and segmentation. DNNs fail to make reliable predictions when they observe an Out-of-Distribution (OOD) input, which can lead to catastrophic consequences. Existing OOD detection methods have been studied extensively for image inputs but remain largely unexplored for LiDAR inputs. In this study, we proposed two datasets for benchmarking OOD detection in 3D semantic segmentation. We used Maximum Softmax Probability and Entropy scores, generated with Deep Ensembles and Flipout versions of RandLA-Net, as OOD scores. We observed that Deep Ensembles outperform the Flipout model in OOD detection, achieving higher AUROC scores on both datasets.
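As a minimal sketch (not the paper's implementation), the two OOD scores named above can be computed from the mean per-point softmax output of an ensemble. The array shapes and function names here are illustrative assumptions; in the paper the softmax outputs would come from the RandLA-Net ensemble or Flipout samples.

```python
import numpy as np

def msp_score(probs):
    # Maximum Softmax Probability: higher MSP means more in-distribution,
    # so we negate it to obtain a score where higher = more OOD.
    return -probs.max(axis=1)

def entropy_score(probs, eps=1e-12):
    # Predictive entropy of the mean softmax: higher = more uncertain / OOD.
    return -np.sum(probs * np.log(probs + eps), axis=1)

# Hypothetical ensemble of M=3 members, N=4 points, C=5 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Deep Ensembles: average the member softmaxes before scoring.
mean_probs = probs.mean(axis=0)
msp = msp_score(mean_probs)          # shape (N,)
ent = entropy_score(mean_probs)      # shape (N,)
```

AUROC is then computed by thresholding these per-point scores against the in-distribution/OOD labels, e.g. with `sklearn.metrics.roc_auc_score`.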