Posterior predictive distributions quantify uncertainties ignored by point estimates. This paper introduces \textit{The Neural Testbed}, which provides tools for the systematic evaluation of agents that generate such predictions. Crucially, these tools assess not only the quality of marginal predictions per input, but also joint predictions given many inputs. Joint distributions are often critical for useful uncertainty quantification, but they have been largely overlooked by the Bayesian deep learning community. We benchmark several approaches to uncertainty estimation using a neural-network-based data generating process. Our results reveal the importance of evaluation beyond marginal predictions. Further, they reconcile sources of confusion in the field, such as why Bayesian deep learning approaches that generate accurate marginal predictions perform poorly in sequential decision tasks, how incorporating priors can be helpful, and what roles epistemic versus aleatoric uncertainty play when evaluating performance. We also present experiments on real-world challenge datasets, which show a high correlation with testbed results and that the importance of evaluating joint predictive distributions carries over to real data. As part of this effort, we open-source The Neural Testbed, including all implementations from this paper.
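To make the distinction between marginal and joint evaluation concrete, one can compare an agent's predictive distribution to the posterior predictive of the data generating process over batches of test inputs. The formulation below is only an illustrative sketch: the notation ($\tau$, $P^{*}$, $\hat{P}$, $d^{\tau}_{\mathrm{KL}}$) is introduced here for exposition and is not quoted from a specific definition in the paper. For $\tau$ test inputs $X_{1:\tau}$ with labels $Y_{1:\tau}$, a marginal evaluation scores the per-input predictives $\hat{P}(Y_t \mid X_t)$, whereas a joint evaluation scores the predictive over the whole batch, for example via the expected Kullback--Leibler divergence
\[
d^{\tau}_{\mathrm{KL}} \;=\; \mathbb{E}\!\left[\, \mathrm{KL}\!\left( P^{*}\!\left(Y_{1:\tau} \mid X_{1:\tau}\right) \,\middle\|\, \hat{P}\!\left(Y_{1:\tau} \mid X_{1:\tau}\right) \right) \right],
\]
where $P^{*}$ denotes the true posterior predictive and $\hat{P}$ the agent's predictive. Setting $\tau = 1$ recovers the marginal case, while $\tau > 1$ probes the joint predictive distribution that the abstract argues is essential for useful uncertainty quantification.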