While Bayesian neural networks (BNNs) hold the promise of being flexible, well-calibrated statistical models, inference often requires approximations whose consequences are poorly understood. We study the quality of common variational methods in approximating the Bayesian predictive distribution. For single-hidden layer ReLU BNNs, we prove a fundamental limitation in function-space of two of the most commonly used distributions defined in weight-space: mean-field Gaussian and Monte Carlo dropout. We find there are simple cases where neither method can have substantially increased uncertainty in between well-separated regions of low uncertainty. We provide strong empirical evidence that exact inference does not have this pathology, hence it is due to the approximation and not the model. In contrast, for deep networks, we prove a universality result showing that there exist approximate posteriors in the above classes which provide flexible uncertainty estimates. However, we find empirically that pathologies of a similar form as in the single-hidden layer case can persist when performing variational inference in deeper networks. Our results motivate careful consideration of the implications of approximate inference methods in BNNs.
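The following is a minimal, illustrative sketch (not the paper's code) of the quantity under study: the Monte Carlo predictive standard deviation of a single-hidden-layer ReLU BNN whose weights follow a mean-field (fully factorized) Gaussian. The variational parameters below are hypothetical placeholders rather than fitted values; the sketch only shows how "in-between" uncertainty is measured by comparing the predictive standard deviation at a midpoint against that at two well-separated inputs.

```python
# Illustrative sketch only: mean-field parameters below are hypothetical,
# not the result of variational inference.
import numpy as np

rng = np.random.default_rng(0)
H, S = 50, 10_000  # hidden units and Monte Carlo samples (assumed sizes)

# Hypothetical mean-field Gaussian parameters: one (mean, std) per weight.
mu_w1, sig_w1 = rng.normal(size=(H, 1)), 0.3 * np.ones((H, 1))
mu_b1, sig_b1 = rng.normal(size=H), 0.3 * np.ones(H)
mu_w2, sig_w2 = rng.normal(size=H) / np.sqrt(H), 0.3 * np.ones(H) / np.sqrt(H)

def sample_f(x):
    """Evaluate S sampled networks at scalar inputs x of shape (N,)."""
    w1 = mu_w1 + sig_w1 * rng.normal(size=(S, H, 1))     # (S, H, 1)
    b1 = mu_b1 + sig_b1 * rng.normal(size=(S, H))        # (S, H)
    w2 = mu_w2 + sig_w2 * rng.normal(size=(S, H))        # (S, H)
    h = np.maximum(w1 @ x[None, :] + b1[..., None], 0.0) # ReLU features, (S, H, N)
    return np.einsum("sh,shn->sn", w2, h)                # function samples, (S, N)

# Two well-separated input locations and the midpoint in between them.
x = np.array([-2.0, 0.0, 2.0])
std = sample_f(x).std(axis=0)
print(f"std at x=-2: {std[0]:.3f}, midpoint x=0: {std[1]:.3f}, x=2: {std[2]:.3f}")
```

Under the paper's single-hidden-layer result, no setting of the mean-field parameters can make the midpoint standard deviation substantially exceed the uncertainty at the surrounding low-uncertainty regions; the sketch merely computes the diagnostic, it does not reproduce the proof or the paper's experiments.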