We present a Bayesian treatment of deep regression using an Errors-in-Variables model that accounts for the uncertainty associated with the input to the employed neural network. We show how this treatment can be combined with existing approaches for uncertainty quantification that are based on variational inference. Our approach yields a decomposition of the predictive uncertainty into an aleatoric and an epistemic part that is more complete and, in many cases, more consistent from a statistical perspective. We illustrate and discuss the approach on various toy and real-world examples.
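To make the decomposition concrete, the following is a minimal sketch, not the paper's implementation: it uses Monte Carlo dropout as a stand-in for a variational-inference-based posterior over the network weights, and resamples a latent "true" input around each observed input to mimic the Errors-in-Variables treatment. All names (`DropoutRegressor`, `predict_with_uncertainty`, `input_std`) and settings are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DropoutRegressor(nn.Module):
    """Toy heteroscedastic regressor; dropout acts as an approximate posterior."""
    def __init__(self, hidden=64, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, 2),  # outputs predicted mean and log noise variance
        )

    def forward(self, x):
        out = self.net(x)
        return out[..., :1], out[..., 1:]  # mean, log sigma^2


@torch.no_grad()
def predict_with_uncertainty(model, x, input_std=0.05, n_weight=50, n_input=10):
    """Monte Carlo decomposition of predictive uncertainty (illustrative only).

    Epistemic part: variance, across weight samples, of the per-weight mean
    prediction. Aleatoric part: average, across weight samples, of the spread
    induced by resampling the latent input plus the predicted noise variance.
    """
    model.train()  # keep dropout active: each forward pass ~ one weight sample
    per_weight_mean, per_weight_var = [], []
    for _ in range(n_weight):
        mus, noise_vars = [], []
        for _ in range(n_input):
            x_true = x + input_std * torch.randn_like(x)  # sampled latent "true" input
            mu, log_var = model(x_true)
            mus.append(mu)
            noise_vars.append(log_var.exp())
        mus = torch.stack(mus)
        # aleatoric contribution for this weight sample: input-induced spread + noise
        per_weight_var.append(mus.var(dim=0) + torch.stack(noise_vars).mean(dim=0))
        per_weight_mean.append(mus.mean(dim=0))
    per_weight_mean = torch.stack(per_weight_mean)
    aleatoric = torch.stack(per_weight_var).mean(dim=0)  # E_w[ Var | w ]
    epistemic = per_weight_mean.var(dim=0)               # Var_w[ E | w ]
    return per_weight_mean.mean(dim=0), aleatoric, epistemic


if __name__ == "__main__":
    model = DropoutRegressor()
    x = torch.linspace(-1.0, 1.0, 5).unsqueeze(-1)
    mean, aleatoric, epistemic = predict_with_uncertainty(model, x)
    print(mean.squeeze(), aleatoric.squeeze(), epistemic.squeeze())
```

The sketch follows the law of total variance: averaging conditional variances over weight samples gives the aleatoric part, while the variance of the conditional means gives the epistemic part; the input-noise resampling illustrates how an Errors-in-Variables view enlarges the aleatoric component.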