This paper introduces a new neural-network-based prior for real-valued functions on $\mathbb R^d$ which, by construction, scales more easily and cheaply in the domain dimension $d$ than the usual Karhunen-Lo\`eve function space prior. The new prior is a Gaussian neural network prior, in which each weight and bias has an independent Gaussian prior, but with the key difference that the variances decrease with the width of the network in such a way that the resulting function is \emph{almost surely} well defined in the limit of an infinite-width network. We show that, in a Bayesian treatment of inferring unknown functions, the induced posterior over functions is amenable to Monte Carlo sampling using Hilbert space Markov chain Monte Carlo (MCMC) methods. This type of MCMC is popular, e.g. in the Bayesian Inverse Problems literature, because it is stable under \emph{mesh refinement}, i.e. the acceptance probability does not shrink to $0$ as more parameters of the function's prior are introduced, even \emph{ad infinitum}. In numerical examples we demonstrate these stated competitive advantages over other function space priors. We also implement examples in Bayesian Reinforcement Learning to automate tasks from data and demonstrate, for the first time, stability of MCMC under mesh refinement for this type of problem.
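For concreteness, the canonical example of a Hilbert space MCMC method of this kind is the preconditioned Crank--Nicolson (pCN) sampler (not necessarily the exact variant used in this paper). Writing $\theta$ for the vector of network weights and biases with Gaussian prior $N(0, C)$, and taking a tuning parameter $\beta \in (0,1]$, the pCN proposal is
\[
\theta' = \sqrt{1-\beta^2}\,\theta + \beta\,\xi, \qquad \xi \sim N(0, C),
\]
accepted with probability $\min\{1, \exp(\Phi(\theta) - \Phi(\theta'))\}$, where $\Phi$ denotes the negative log-likelihood. Because this proposal is reversible with respect to the Gaussian prior, the acceptance ratio involves only the likelihood, which is why the acceptance probability does not degenerate as more parameters of the prior are introduced.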