We propose a novel method for closed-form predictive distribution modeling with neural nets. To quantify prediction uncertainty, we build on Evidential Deep Learning, which owes its impact to being both simple to implement and providing closed-form access to predictive uncertainty. We employ it to model aleatoric uncertainty and extend it to also account for epistemic uncertainty by converting it into a Bayesian Neural Net. While extending its uncertainty quantification capabilities, we keep its predictive distribution analytically accessible by applying progressive moment matching, for the first time, to approximate weight marginalization. The resulting model introduces a prohibitively large number of hyperparameters, which hinders stable training. We overcome this drawback by deriving a vacuous PAC bound that comprises the marginal likelihood of the predictor and a complexity penalty. On regression, classification, and out-of-domain detection benchmarks, we observe that our method improves both model fit and uncertainty quantification.
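To make the moment-matching idea concrete, below is a minimal sketch of how the first two moments of a Gaussian can be propagated analytically through one stochastic layer, assuming factorized Gaussian weights and a ReLU activation; the function names and parameter choices are hypothetical illustrations, not the paper's implementation.

```python
# Minimal moment-matching sketch: propagate mean/variance through a
# Bayesian linear layer and a ReLU, assuming factorized Gaussian weights.
# All names here are illustrative assumptions, not the paper's code.
import torch

def linear_moments(x_mean, x_var, w_mean, w_var, b_mean, b_var):
    """First two moments of y = x @ W^T + b, assuming the inputs and the
    factorized Gaussian weights are independent random variables."""
    y_mean = x_mean @ w_mean.T + b_mean
    # Var[w x] = Var[w](Var[x] + E[x]^2) + E[w]^2 Var[x], summed over inputs.
    y_var = (x_var + x_mean**2) @ w_var.T + x_var @ (w_mean**2).T + b_var
    return y_mean, y_var

def relu_moments(mean, var):
    """Closed-form moments of ReLU(x) for x ~ N(mean, var), via standard
    Gaussian integral identities."""
    std = var.clamp_min(1e-8).sqrt()
    alpha = mean / std
    std_normal = torch.distributions.Normal(0.0, 1.0)
    pdf, cdf = std_normal.log_prob(alpha).exp(), std_normal.cdf(alpha)
    out_mean = mean * cdf + std * pdf
    out_sq = (mean**2 + var) * cdf + mean * std * pdf  # E[ReLU(x)^2]
    return out_mean, (out_sq - out_mean**2).clamp_min(0.0)

# Usage: push a deterministic input through one stochastic layer.
x = torch.randn(8, 16)                                  # batch of inputs
w_mu, w_var = torch.randn(32, 16), torch.rand(32, 16) * 0.01
b_mu, b_var = torch.zeros(32), torch.full((32,), 0.01)
h_mu, h_var = linear_moments(x, torch.zeros_like(x), w_mu, w_var, b_mu, b_var)
a_mu, a_var = relu_moments(h_mu, h_var)  # moments after the nonlinearity
```

Repeating this layer by layer ("progressively") yields an analytic approximation to the weight-marginalized predictive distribution without Monte Carlo sampling.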
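The abstract only names the bound's ingredients; as a hedged reference point (not the paper's derivation), a standard McAllester-style PAC-Bayes bound over a weight posterior $Q$ and prior $P$, with $n$ i.i.d. samples and confidence $1-\delta$, has the shape

\[
L(Q) \;\le\; \widehat{L}(Q) \;+\; \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},
\]

i.e., an empirical data-fit term plus a complexity penalty; minimizing such a right-hand side mirrors the marginal-likelihood-plus-penalty training objective described above.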