We propose a framework for predictive uncertainty quantification of a neural network that replaces the conventional Bayesian notion of a weight probability density function (PDF) with a physics-based potential-field representation of the model weights in a Gaussian reproducing kernel Hilbert space (RKHS) embedding. This allows us to apply perturbation theory from quantum physics to formulate a moment-decomposition problem over the model weight-output relationship. The extracted moments reveal successive degrees of regularization of the weight PDF in the local neighborhood of the model output. Such localized moments capture, with great sensitivity, the local heterogeneity of the weight PDF around a model prediction, thereby providing significantly more accurate estimates of model predictive uncertainty than the central moments characterized by Bayesian and ensemble methods. We show that this in turn improves the ability to detect false model predictions on test data that has undergone a covariate shift away from the training PDF learned by the model. We evaluate our approach against baseline uncertainty quantification methods on several benchmark datasets corrupted with common distortion techniques. Our approach provides fast model predictive uncertainty estimates with substantially greater precision and calibration.