Neural Linear Models (NLMs) are deep Bayesian models that produce predictive uncertainties by learning features from the data and then performing Bayesian linear regression over these features. Despite their popularity, few works have focused on methodically evaluating the predictive uncertainties of these models. In this work, we demonstrate that traditional training procedures for NLMs drastically underestimate uncertainty on out-of-distribution inputs, and that they therefore cannot be naively deployed in risk-sensitive applications. We identify the underlying reasons for this behavior and propose a novel training framework that captures useful predictive uncertainties for downstream tasks.
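To make the setup concrete, below is a minimal NumPy sketch of the Bayesian linear regression step that an NLM performs over learned features. The feature map `phi`, the prior precision `alpha`, and the noise precision `beta` are illustrative placeholders, not the paper's actual architecture or hyperparameters; in a real NLM, `phi` would be the last hidden layer of a trained network.

```python
import numpy as np

def phi(x):
    """Placeholder feature map; in an NLM this is a learned network's
    last hidden layer. This toy version is purely illustrative."""
    return np.stack([np.ones_like(x), x, np.sin(x)], axis=-1)

def fit_bayesian_linear(X, y, alpha=1.0, beta=25.0):
    """Conjugate Gaussian posterior over the last-layer weights.

    Prior: w ~ N(0, alpha^{-1} I). Likelihood: y ~ N(Phi w, beta^{-1}).
    """
    Phi = phi(X)                                    # (N, D) design matrix
    D = Phi.shape[1]
    S_inv = alpha * np.eye(D) + beta * Phi.T @ Phi  # posterior precision
    S = np.linalg.inv(S_inv)                        # posterior covariance
    m = beta * S @ Phi.T @ y                        # posterior mean
    return m, S

def predict(x, m, S, beta=25.0):
    """Predictive mean and variance at new inputs x."""
    Phi = phi(x)
    mean = Phi @ m
    # Epistemic term diag(Phi S Phi^T) plus aleatoric noise 1/beta.
    var = 1.0 / beta + np.sum(Phi @ S * Phi, axis=-1)
    return mean, var

# Usage: fit on data from [-2, 2], then query one in-distribution point
# and one out-of-distribution point (x = 5.0) to inspect the variance.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=50)
y = np.sin(X) + rng.normal(scale=0.2, size=50)
m, S = fit_bayesian_linear(X, y)
mean, var = predict(np.array([0.0, 5.0]), m, S)
print(mean, np.sqrt(var))
```

Note that in this sketch the predictive variance at an out-of-distribution input depends entirely on how the feature map extrapolates; this is exactly the mechanism by which standard NLM training can underestimate uncertainty away from the data.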