Monte Carlo (MC) integration is the de facto method for approximating the predictive distribution of Bayesian neural networks (BNNs). However, even with many MC samples, Gaussian-based BNNs can still yield poor predictive performance due to error in the posterior approximation itself. Meanwhile, alternatives to MC integration tend to be more expensive or biased. In this work, we experimentally show that the key to good MC-approximated predictive distributions is the quality of the approximate posterior itself. However, previous methods for obtaining accurate posterior approximations are expensive and non-trivial to implement. We therefore propose to refine Gaussian approximate posteriors with normalizing flows. Applied to last-layer BNNs, this yields a simple \emph{post hoc} method for improving pre-existing parametric approximations. We show that the resulting posterior approximation is competitive even with the gold-standard full-batch Hamiltonian Monte Carlo.
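To make the refinement idea concrete, here is a minimal sketch (not the paper's reference implementation) of post hoc refinement in PyTorch: a fixed Gaussian base distribution over the last-layer weights, e.g. from a Laplace or variational fit, is pushed through a stack of planar flows trained by maximizing the ELBO. Planar flows are used here only as one simple flow family; the names `PlanarFlow`, `refine`, and `log_joint` are illustrative assumptions, not from the paper.

```python
import torch
import torch.nn as nn

class PlanarFlow(nn.Module):
    """One planar-flow layer: f(z) = z + u * tanh(w^T z + b).

    Note: the standard invertibility constraint on u (Rezende & Mohamed, 2015)
    is omitted for brevity; the small initialization keeps it nearly satisfied.
    """
    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(0.01 * torch.randn(dim))
        self.w = nn.Parameter(0.01 * torch.randn(dim))
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        # z: (batch, dim); returns warped samples and log|det Jacobian|
        a = torch.tanh(z @ self.w + self.b)              # (batch,)
        z_new = z + a.unsqueeze(-1) * self.u             # (batch, dim)
        psi = (1 - a ** 2).unsqueeze(-1) * self.w        # h'(w^T z + b) * w
        log_det = torch.log((1 + psi @ self.u).abs() + 1e-8)
        return z_new, log_det

def refine(log_joint, mu, std, dim, n_layers=5, steps=1000, n_mc=32):
    """Refine a Gaussian posterior by maximizing the ELBO over flow parameters.

    log_joint(theta): unnormalized log posterior for a batch of last-layer
        weight samples theta, shape (batch, dim).
    mu, std: parameters of the fixed Gaussian base q0 (the pre-existing
        parametric approximation being refined).
    """
    flows = nn.ModuleList([PlanarFlow(dim) for _ in range(n_layers)])
    base = torch.distributions.Normal(mu, std)
    opt = torch.optim.Adam(flows.parameters(), lr=1e-3)
    for _ in range(steps):
        z = base.rsample((n_mc,))            # reparameterized samples from q0
        log_q = base.log_prob(z).sum(-1)     # log q0(z)
        for flow in flows:
            z, log_det = flow(z)
            log_q = log_q - log_det          # change-of-variables correction
        loss = -(log_joint(z) - log_q).mean()  # negative ELBO estimate
        opt.zero_grad()
        loss.backward()
        opt.step()
    return flows
```

Samples from the refined posterior are then obtained by drawing from the base Gaussian and pushing them through the trained flows, after which MC integration over the predictive distribution proceeds exactly as before.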