Deep neural networks (DNNs) for the semantic segmentation of images are usually trained to operate on a predefined closed set of object classes. This is in contrast to the "open world" setting in which DNNs are envisioned to be deployed. From a functional safety point of view, the ability to detect so-called "out-of-distribution" (OoD) samples, i.e., objects outside of a DNN's semantic space, is crucial for many applications such as automated driving. A natural baseline approach to OoD detection is to threshold the pixel-wise softmax entropy. We present a two-step procedure that significantly improves upon that approach. First, we utilize samples from the COCO dataset as an OoD proxy and introduce a second training objective that maximizes the softmax entropy on these samples. Starting from pretrained semantic segmentation networks, we re-train a number of DNNs on different in-distribution datasets and consistently observe improved OoD detection performance when evaluating on completely disjoint OoD datasets. Second, we perform a transparent post-processing step to discard false-positive OoD predictions via so-called "meta classification". To this end, we apply linear models to a set of hand-crafted metrics derived from the DNN's softmax probabilities. In our experiments we consistently observe a clear additional gain in OoD detection performance, reducing the number of detection errors by up to 52% when comparing the best baseline with our results. We achieve this improvement while sacrificing only marginally in original segmentation performance. Our method therefore contributes to safer DNNs with more reliable overall system performance.
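The baseline mentioned above, thresholding the pixel-wise softmax entropy, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the threshold value and the 19-class logit shape are arbitrary assumptions for the example.

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pixelwise_entropy(logits):
    """Softmax entropy per pixel, normalized to [0, 1] by log(num_classes)."""
    p = softmax(logits)
    num_classes = logits.shape[-1]
    h = -(p * np.log(p + 1e-12)).sum(axis=-1)
    return h / np.log(num_classes)

def ood_mask(logits, threshold=0.7):
    # pixels whose normalized entropy exceeds the threshold are flagged as OoD
    # (the threshold 0.7 is an illustrative choice, not a value from the paper)
    return pixelwise_entropy(logits) > threshold

# example: random logits for a 4x4 image with 19 classes (Cityscapes-style)
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 19))
mask = ood_mask(logits)  # boolean OoD map of shape (4, 4)
```

Uniform logits yield normalized entropy 1 (maximally uncertain), while a strongly peaked logit vector yields entropy near 0; the second training objective in the paper pushes entropy toward the former on OoD proxy pixels and toward the latter on in-distribution pixels.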