We show that a single softmax neural network with minimal changes can beat the uncertainty predictions of Deep Ensembles and other, more complex, single-forward-pass uncertainty approaches. Standard softmax neural nets suffer from feature collapse and extrapolate arbitrarily for OoD points. As a result, their softmax entropy on OoD points is arbitrary: it can be high, low, or anything in between, and therefore cannot reliably capture epistemic uncertainty. We prove that this failure lies at the core of "why" Deep Ensemble uncertainty works well. Instead of using softmax entropy, we show that, with appropriate inductive biases, softmax neural nets trained with maximum likelihood reliably capture epistemic uncertainty through their feature-space density. This density is obtained using simple Gaussian Discriminant Analysis, but it cannot represent aleatoric uncertainty reliably. We show that it is necessary to combine feature-space density with softmax entropy to disentangle the two uncertainties well. We evaluate epistemic-uncertainty quality on active learning and OoD detection, achieving SOTA ~98 AUROC on CIFAR-10 vs. SVHN without fine-tuning on OoD data.
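The core recipe above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it fits one Gaussian per class (with a shared covariance, as in Gaussian Discriminant Analysis) on hypothetical penultimate-layer features, uses the marginal feature density as the epistemic score, and softmax entropy as the aleatoric score. The feature tensor here is synthetic stand-in data.

```python
# Sketch of GDA feature-space density + softmax entropy (illustrative only).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Hypothetical stand-in for penultimate-layer features of a trained net:
# three well-separated class clusters in an 8-d feature space.
n_classes, dim, n_per = 3, 8, 100
class_means_true = rng.normal(size=(n_classes, dim)) * 3.0
labels = np.repeat(np.arange(n_classes), n_per)
feats = class_means_true[labels] + rng.normal(size=(n_classes * n_per, dim))

# Fit GDA: per-class means, shared covariance, class priors.
means = np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])
cov = np.cov(feats.T) + 1e-3 * np.eye(dim)  # jitter for numerical stability
priors = np.bincount(labels) / len(labels)

def feature_density(z):
    """Marginal density q(z) = sum_c pi_c N(z; mu_c, Sigma).
    Low density signals epistemic uncertainty (e.g. OoD inputs)."""
    return sum(p * multivariate_normal.pdf(z, mean=m, cov=cov)
               for p, m in zip(priors, means))

def softmax_entropy(logits):
    """Predictive entropy of the softmax; high on ambiguous
    (aleatoric) inputs near the decision boundary."""
    e = np.exp(logits - logits.max())
    p = e / e.sum()
    return -np.sum(p * np.log(p + 1e-12))

in_dist = feats[0]          # a training-like feature vector
far_ood = in_dist + 50.0    # far from every class mean
print(feature_density(in_dist) > feature_density(far_ood))  # True
```

The point of the two scores is the disentanglement claimed in the abstract: an ambiguous in-distribution input has high feature density but high entropy, while an OoD input has low feature density regardless of what entropy the extrapolated softmax happens to produce.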