Recent work has shown that the prior over functions induced by a deep Bayesian neural network (BNN) behaves as a Gaussian process (GP) as the width of all layers becomes large. However, many BNN applications are concerned with the BNN function space posterior. While some empirical evidence of the posterior convergence was provided in the original works of Neal (1996) and Matthews et al. (2018), it is limited to small datasets or architectures due to the notorious difficulty of obtaining and verifying exactness of BNN posterior approximations. We provide the missing theoretical proof that the exact BNN posterior converges (weakly) to the one induced by the GP limit of the prior. For empirical validation, we show how to generate exact samples from a finite BNN on a small dataset via rejection sampling.
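The rejection-sampling idea mentioned above can be sketched in a few lines: draw network weights from the prior, then accept a draw with probability proportional to the data likelihood, which is bounded above by its value at zero residuals. The snippet below is a minimal illustrative sketch, not the paper's actual procedure; the one-hidden-layer architecture, width, noise level, and tiny dataset are all assumptions chosen to keep acceptance rates workable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny hypothetical dataset: two 1-D inputs with noisy targets.
X = np.array([[-1.0], [1.0]])
y = np.array([0.3, -0.2])
sigma = 0.5  # assumed observation-noise standard deviation

H = 10          # hidden width (assumption)
DIM = 3 * H + 1  # W1 (1xH) + b1 (H) + W2 (Hx1) + b2 (scalar)

def bnn_forward(theta, x):
    """One-hidden-layer tanh network; theta packs all weights.

    The output weights are scaled by 1/sqrt(H), the standard
    parameterization under which wide BNNs approach a GP.
    """
    W1 = theta[:H].reshape(1, H)
    b1 = theta[H:2 * H]
    W2 = theta[2 * H:3 * H].reshape(H, 1) / np.sqrt(H)
    b2 = theta[3 * H]
    return (np.tanh(x @ W1 + b1) @ W2).ravel() + b2

def log_lik(theta):
    """Gaussian log-likelihood of the data, dropping constants."""
    r = y - bnn_forward(theta, X)
    return -0.5 * np.sum(r ** 2) / sigma ** 2  # <= 0, max at r = 0

def sample_posterior(n_samples):
    """Exact posterior samples via rejection sampling.

    Proposal = standard normal prior over theta; accept with
    probability exp(log_lik(theta)), valid because log_lik <= 0.
    """
    samples = []
    while len(samples) < n_samples:
        theta = rng.standard_normal(DIM)
        if np.log(rng.uniform()) < log_lik(theta):
            samples.append(theta)
    return np.array(samples)

post = sample_posterior(3)  # three exact posterior draws
```

Because every accepted draw is an exact sample from the finite-network posterior, statistics of `bnn_forward(post[i], X)` can be compared directly against the limiting GP posterior; the acceptance rate decays quickly with dataset size, which is why such exact validation is only feasible on very small datasets.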