Predictive distributions quantify uncertainties ignored by point estimates. This paper introduces The Neural Testbed: an open-source benchmark for controlled and principled evaluation of agents that generate such predictions. Crucially, the testbed assesses agents not only on the quality of their marginal predictions per input, but also on their joint predictions across many inputs. We evaluate a range of agents using a simple neural-network data generating process. Our results indicate that some popular Bayesian deep learning agents do not fare well with joint predictions, even when they can produce accurate marginal predictions. We also show that the quality of joint predictions drives performance in downstream decision tasks. We find these results are robust across a wide range of generative models, and highlight the practical importance of joint predictions to the community.
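To make the distinction between marginal and joint predictions concrete, the sketch below contrasts the two log-loss computations for an agent represented by Monte-Carlo posterior samples of class probabilities. This is an illustrative sketch only, not the testbed's actual API; the function names, array shapes, and posterior-sample setup are all assumptions for exposition.

```python
import numpy as np


def marginal_nll(probs: np.ndarray, labels: np.ndarray) -> float:
    """Average negative log-likelihood of each label under the marginal
    predictive: the posterior-sample mean is taken per input, so any
    dependence between inputs is ignored.

    probs: [num_samples, tau, num_classes]; probs[m, t, c] is posterior
      sample m's probability of class c at test input t.
    labels: integer array of shape [tau] with the true labels.
    """
    marginal = probs.mean(axis=0)  # [tau, num_classes]
    return -np.mean(np.log(marginal[np.arange(len(labels)), labels]))


def joint_nll(probs: np.ndarray, labels: np.ndarray) -> float:
    """Negative log-likelihood of the whole label vector under the joint
    predictive, estimated by Monte Carlo over posterior samples:
      p(y_1..y_tau) ~= mean_m prod_t p_m(y_t | x_t).
    """
    per_sample = probs[:, np.arange(len(labels)), labels]  # [num_samples, tau]
    log_joint = np.log(per_sample).sum(axis=1)  # [num_samples]
    # log-mean-exp over posterior samples for numerical stability.
    m = log_joint.max()
    return -(m + np.log(np.mean(np.exp(log_joint - m))))


# Toy usage: 100 posterior samples, tau=10 test inputs, 3 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=(100, 10))
labels = rng.integers(0, 3, size=10)
print(marginal_nll(probs, labels), joint_nll(probs, labels))
```

An agent that collapses to a single posterior sample leaves the marginal loss unchanged but can score far worse on the joint loss, which is the kind of failure the testbed is designed to surface.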