A set of novel approaches for estimating epistemic uncertainty in deep neural networks with a single forward pass has recently emerged as a valid alternative to Bayesian Neural Networks. On the premise of informative representations, these deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution (OOD) data while adding negligible computational costs at inference time. However, it remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications, both prerequisites for their practical deployment. To this end, we first provide a taxonomy of DUMs and evaluate their calibration under continuous distributional shifts. Then, we extend them to semantic segmentation. We find that, while DUMs scale to realistic vision tasks and perform well on OOD detection, the practicality of current methods is undermined by poor calibration under distributional shifts.
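To make the core idea concrete: DUMs derive an uncertainty score from the representation produced by one deterministic forward pass, rather than from sampling many stochastic passes. The following is a minimal illustrative sketch, not the method of any specific paper: it stands in for a frozen feature extractor with toy 2-D features, fits one diagonal Gaussian per class, and scores an input by its (Mahalanobis-like) distance to the nearest class in feature space, so that inputs far from all training classes receive high uncertainty. All names and numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "features": stand-in for the output of a frozen encoder.
# In-distribution training features for two classes.
feats = {
    0: rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2)),
    1: rng.normal(loc=[4.0, 4.0], scale=0.5, size=(200, 2)),
}

# Fit one Gaussian per class (diagonal covariance for simplicity).
params = {c: (f.mean(axis=0), f.var(axis=0) + 1e-6) for c, f in feats.items()}

def uncertainty(x):
    """Single-pass score: squared distance to the nearest class centroid
    in feature space, normalized by per-class variance.
    Higher score = more uncertain, i.e. more likely OOD."""
    dists = [np.sum((x - mu) ** 2 / var) for mu, var in params.values()]
    return min(dists)

in_dist = uncertainty(np.array([0.1, -0.1]))   # near class 0
ood = uncertainty(np.array([10.0, -10.0]))     # far from both classes
print(in_dist < ood)  # the OOD input scores as more uncertain
```

Note that this score requires no extra forward passes at test time, which is what keeps the inference cost of DUMs negligible; the calibration question the abstract raises is whether such distance-based scores remain meaningful as the test distribution drifts away from the training one.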