Compared to the point estimates produced by standard neural networks, Bayesian neural networks (BNNs) provide probability distributions over the output predictions and the model parameters, i.e., the weights. Training the weight distribution of a BNN, however, is more involved due to the intractability of the underlying Bayesian inference problem and thus requires efficient approximations. In this paper, we propose a novel approach to BNN learning via closed-form Bayesian inference. For this purpose, the calculation of the predictive distribution of the output and the update of the weight distribution are treated as Bayesian filtering and smoothing problems, where the weights are modeled as Gaussian random variables. This allows closed-form expressions for training the network's parameters in a sequential/online fashion without gradient descent. We demonstrate our method on several UCI datasets and compare it to the state of the art.
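To illustrate the filtering view of weight learning, the following is a minimal sketch for the special case of a single linear "neuron" with a Gaussian weight prior, where the sequential closed-form update coincides with a Kalman-filter measurement update. The function name `kalman_weight_update`, the fixed noise variance `r`, and the synthetic data are illustrative assumptions, not the paper's full method for deep nonlinear networks.

```python
import numpy as np

def kalman_weight_update(mu, Sigma, x, y, r):
    """One closed-form (Kalman-style) update of the Gaussian weight
    distribution N(mu, Sigma) given an observation y = w^T x + noise,
    where r is the (assumed known) observation-noise variance."""
    s = x @ Sigma @ x + r                # predictive variance of y
    K = (Sigma @ x) / s                  # Kalman gain
    mu_new = mu + K * (y - mu @ x)       # posterior mean
    Sigma_new = Sigma - np.outer(K, x @ Sigma)  # posterior covariance
    return mu_new, Sigma_new

# Sequential/online training on synthetic data, without gradient descent.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])  # ground-truth weights (illustrative)
mu = np.zeros(2)                # prior mean over the weights
Sigma = 10.0 * np.eye(2)        # prior covariance over the weights
r = 0.01                        # observation-noise variance

for _ in range(200):
    x = rng.normal(size=2)
    y = w_true @ x + rng.normal(scale=np.sqrt(r))
    mu, Sigma = kalman_weight_update(mu, Sigma, x, y, r)
```

After the updates, the posterior mean `mu` concentrates near `w_true` and the posterior covariance `Sigma` shrinks, giving both a prediction and a calibrated uncertainty for each weight.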