This paper introduces a new neural-network-based prior for real-valued functions on $\mathbb R^d$ which, by construction, scales more easily and cheaply with the domain dimension $d$ than the usual Karhunen-Lo\`eve function space prior. The new prior is a Gaussian neural network prior, in which each weight and bias has an independent Gaussian prior, but with the key difference that the variances decrease with the width of the network in such a way that the resulting function is almost surely well defined in the limit of an infinite-width network. We show that in a Bayesian treatment of inferring unknown functions, the induced posterior over functions is amenable to Monte Carlo sampling using Hilbert-space Markov chain Monte Carlo (MCMC) methods. These methods are popular, e.g., in the Bayesian Inverse Problems literature, because they are stable under mesh refinement, i.e. the acceptance probability does not shrink to $0$ as more parameters of the function's prior are introduced, even ad infinitum. In numerical examples we demonstrate these advantages over other function space priors. We also implement examples in Bayesian Reinforcement Learning to automate tasks from data and demonstrate, for the first time, stability of MCMC under mesh refinement for this type of problem.
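To fix ideas, the following display is a minimal sketch, not the construction used in the paper (whose exact variance schedule is not given in this abstract): a width-$H$, single-hidden-layer prior with output-weight variances decaying in the unit index, together with the preconditioned Crank--Nicolson (pCN) proposal, a standard example of the Hilbert-space MCMC methods referred to above.
\[
f(x) = c + \sum_{j=1}^{H} w_j\, \tanh\!\big(a_j^\top x + b_j\big),
\qquad a_j \sim \mathcal N(0, I_d),\ \ b_j, c \sim \mathcal N(0,1),\ \ w_j \sim \mathcal N(0, \sigma_j^2),
\]
with, for instance, $\sigma_j^2 \propto j^{-(1+\epsilon)}$ for some $\epsilon > 0$, so that $\sum_j \sigma_j^2 < \infty$ and the series converges almost surely as $H \to \infty$. Writing $\theta$ for the whitened parameter vector, so that the prior is $\mathcal N(0, I)$, a pCN step proposes
\[
\theta' = \sqrt{1-\beta^2}\,\theta + \beta\, \xi, \qquad \xi \sim \mathcal N(0, I),\ \ \beta \in (0,1),
\]
and accepts with probability $\min\{1, \exp(\Phi(\theta) - \Phi(\theta'))\}$, where $\Phi$ denotes the negative log-likelihood. The acceptance ratio involves only the likelihood and not the Gaussian prior, which is why such proposals remain stable as the prior's parameterization is refined (here, as $H$ grows).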