Neural networks (NNs) are widely used for object classification in autonomous driving. However, NNs can fail on input data not well represented by the training dataset, known as out-of-distribution (OOD) data. A mechanism to detect OOD samples is important for safety-critical applications, such as automotive perception, to trigger a safe fallback mode. NNs often rely on softmax normalization for confidence estimation, which can lead to high confidence scores being assigned to OOD samples, thus hindering the detection of failures. This paper presents a method for determining whether inputs are OOD, which does not require OOD data during training and does not increase the computational cost of inference. The latter property is especially important in automotive applications with limited computational resources and real-time constraints. Our proposed approach outperforms state-of-the-art methods on real-world automotive datasets.
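The softmax-overconfidence issue mentioned above can be illustrated with a minimal sketch. The logit values below are hypothetical, not taken from the paper: even when a network has no meaningful class for an OOD input, whichever logit happens to be largest still receives a high softmax probability.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for an in-distribution sample: one class clearly dominates.
in_dist = softmax(np.array([8.0, 1.0, 0.5]))

# Hypothetical logits for an OOD sample: no class is truly appropriate,
# yet softmax still concentrates probability mass on the largest logit.
ood = softmax(np.array([5.0, 1.0, 0.5]))

print(in_dist.max())  # high confidence, justified here
print(ood.max())      # also high (~0.97), misleading for an OOD input
```

This is why thresholding the raw softmax confidence is an unreliable OOD detector: the normalization only reflects relative logit magnitudes, not whether the input resembles the training distribution.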