A Bayesian treatment can mitigate overconfidence in ReLU nets around the training data. But far away from the training data, ReLU Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be asymptotically overconfident. This issue arises since the output variance of a BNN with finitely many features is quadratic in the distance from the data region. Meanwhile, Bayesian linear models with ReLU features converge, in the infinite-width limit, to a particular Gaussian process (GP) whose variance grows cubically, so that no asymptotic overconfidence can occur. While this may seem of mostly theoretical interest, in this work, we show that it can be exploited concretely to the benefit of BNNs. We extend finite ReLU BNNs with infinite ReLU features via the GP and show that the resulting model is asymptotically maximally uncertain far away from the data, while the BNNs' predictive power near the data is unaffected. Although the resulting model approximates a full GP posterior, its structure allows it to be applied post-hoc to any pre-trained ReLU BNN at low cost.
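To make the contrast between the two growth rates concrete, here is a minimal NumPy sketch (an illustration under simplifying assumptions, not the paper's implementation). It uses 1D ReLU features max(0, x - c); the uniform threshold range [0, 1], the feature count m, and the prior scale sigma2 are illustrative choices. For the infinite-feature limit, thresholds dense on [0, inf) give the closed-form prior variance k(x, x) = sigma^2 * x^3 / 3, obtained by integrating the squared feature over thresholds c in [0, x].

```python
# Minimal sketch (illustrative assumptions, not the paper's code): compare the
# prior variance growth of a finite-feature ReLU Bayesian linear model (quadratic)
# against its infinite-feature GP limit (cubic).
import numpy as np

sigma2 = 1.0
xs = np.logspace(1, 4, 8)  # test points far from the "data region" [0, 1]

# Finite-feature model: m ReLU features phi_c(x) = max(0, x - c) with thresholds
# c_i drawn uniformly from [0, 1], prior weights w ~ N(0, sigma2/m * I).
m = 100
c = np.random.default_rng(0).uniform(0.0, 1.0, size=m)
phi = np.maximum(0.0, xs[:, None] - c[None, :])       # shape (len(xs), m)
var_finite = sigma2 / m * np.sum(phi**2, axis=1)      # prior predictive variance

# Infinite-feature limit: thresholds dense on [0, inf) yield the cubic-spline
# kernel, with prior variance k(x, x) = sigma2 * x^3 / 3.
var_infinite = sigma2 * xs**3 / 3.0

# Empirical growth exponents via log-log slopes: ~2 (quadratic) vs 3 (cubic).
slope = lambda v: np.polyfit(np.log(xs), np.log(v), 1)[0]
print(f"finite-feature exponent:   {slope(var_finite):.2f}")    # ~2
print(f"infinite-feature exponent: {slope(var_infinite):.2f}")  # 3
```

Far from the thresholds, every finite feature behaves like max(0, x - c) ~ x, so the finite model's variance can only grow like x^2; densely covering the input domain with thresholds is what buys the extra factor of x and hence the cubic growth the abstract refers to.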