A set of novel approaches for estimating epistemic uncertainty in deep neural networks with a single forward pass has recently emerged as a valid alternative to Bayesian Neural Networks. Building on the premise of informative representations, these deterministic uncertainty methods (DUMs) achieve strong performance in detecting out-of-distribution (OOD) data while adding negligible computational cost at inference time. However, it remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications, both of which are prerequisites for their practical deployment. To this end, we first provide a taxonomy of DUMs and evaluate their calibration under continuous distributional shifts as well as their performance on OOD detection for image classification tasks. Then, we extend the most promising approaches to semantic segmentation. We find that, while DUMs scale to realistic vision tasks and perform well on OOD detection, the practicality of current methods is undermined by poor calibration under realistic distributional shifts.