The posterior over Bayesian neural network (BNN) parameters is extremely high-dimensional and non-convex. For computational reasons, researchers approximate this posterior using inexpensive mini-batch methods such as mean-field variational inference or stochastic-gradient Markov chain Monte Carlo (SGMCMC). To investigate foundational questions in Bayesian deep learning, we instead use full-batch Hamiltonian Monte Carlo (HMC) on modern architectures. We show that (1) BNNs can achieve significant performance gains over standard training and deep ensembles; (2) a single long HMC chain can provide a representation of the posterior comparable to that of multiple shorter chains; (3) in contrast to recent studies, we find posterior tempering is not needed for near-optimal performance, with little evidence for a "cold posterior" effect, which we show is largely an artifact of data augmentation; (4) Bayesian model average (BMA) performance is robust to the choice of prior scale, and relatively similar for diagonal Gaussian, mixture-of-Gaussian, and logistic priors; (5) Bayesian neural networks show surprisingly poor generalization under domain shift; (6) while cheaper alternatives such as deep ensembles and SGMCMC methods can provide good generalization, their predictive distributions are distinct from HMC's. Notably, deep ensemble predictive distributions are as close to the HMC predictive distribution as those of standard stochastic gradient Langevin dynamics (SGLD), and closer than those of standard variational inference.
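For context on point (3), posterior tempering refers to sampling from $p(\theta \mid \mathcal{D})^{1/T}$ rather than the true posterior, and the "cold posterior" effect is the reported improvement in accuracy at temperatures $T < 1$. To make the sampler concrete, the sketch below is a minimal full-batch HMC step in Python/NumPy, applied to a toy 2-D Gaussian rather than a BNN posterior; the function names (`log_prob`, `hmc_step`) and all hyperparameter values are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of full-batch Hamiltonian Monte Carlo (HMC): leapfrog
# integration over the *full* gradient (no mini-batching), followed by a
# Metropolis accept/reject correction. Toy target: a standard 2-D Gaussian.
import numpy as np

rng = np.random.default_rng(0)

def log_prob(theta):
    # Unnormalized log posterior; a standard Gaussian for illustration.
    return -0.5 * np.sum(theta ** 2)

def grad_log_prob(theta):
    return -theta

def hmc_step(theta, step_size=0.1, n_leapfrog=20):
    # Draw an auxiliary momentum, simulate Hamiltonian dynamics with the
    # leapfrog integrator, then accept/reject based on the energy change.
    r = rng.standard_normal(theta.shape)
    theta_new, r_new = theta.copy(), r.copy()
    r_new += 0.5 * step_size * grad_log_prob(theta_new)  # half momentum step
    for _ in range(n_leapfrog - 1):
        theta_new += step_size * r_new
        r_new += step_size * grad_log_prob(theta_new)
    theta_new += step_size * r_new
    r_new += 0.5 * step_size * grad_log_prob(theta_new)  # final half step
    # Hamiltonian = potential energy (-log_prob) + kinetic energy.
    h_old = -log_prob(theta) + 0.5 * np.sum(r ** 2)
    h_new = -log_prob(theta_new) + 0.5 * np.sum(r_new ** 2)
    if rng.random() < np.exp(min(0.0, h_old - h_new)):
        return theta_new
    return theta

theta = np.zeros(2)
samples = []
for _ in range(1000):
    theta = hmc_step(theta)
    samples.append(theta.copy())
print(np.mean(samples, axis=0), np.var(samples, axis=0))  # ~[0 0], ~[1 1]
```

For a BNN, `log_prob` would be the log joint of the prior and the full-dataset likelihood, which is what makes full-batch HMC so much more expensive than the mini-batch SGMCMC and variational methods the abstract contrasts it with.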