To adopt neural networks in safety-critical domains, knowing whether we can trust their predictions is crucial. Bayesian neural networks (BNNs) provide uncertainty estimates by averaging predictions with respect to the posterior weight distribution. Variational inference methods for BNNs approximate the intractable weight posterior with a tractable distribution, yet mostly rely on sampling from the variational distribution during training and inference. Recent sampling-free approaches offer an alternative, but incur a significant parameter overhead. We here propose a more efficient parameterization of the posterior approximation for sampling-free variational inference that relies on the distribution induced by multiplicative Gaussian activation noise. This allows us to combine parameter efficiency with the benefits of sampling-free variational inference. Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks, including ImageNet.
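The core idea can be illustrated with a minimal sketch. Suppose each input activation is perturbed by multiplicative Gaussian noise eps ~ N(1, alpha); then the pre-activations of a linear layer have closed-form mean and variance, so no sampling is needed to propagate uncertainty. The shapes, variable names, and the single-layer setting below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer: W and a are arbitrary; alpha is the variance of the
# multiplicative noise eps ~ N(1, alpha) applied elementwise to the inputs.
W = rng.normal(size=(3, 5))   # weight matrix (hypothetical)
a = rng.normal(size=5)        # deterministic input activations (hypothetical)
alpha = 0.1                   # activation-noise variance

# Sampling-free moments of z = W @ (a * eps):
#   E[z_i]   = sum_j W_ij * a_j              (since E[eps_j] = 1)
#   Var[z_i] = sum_j W_ij^2 * alpha * a_j^2  (independent noise per input)
mean_z = W @ a
var_z = (W ** 2) @ (alpha * a ** 2)

# Monte Carlo check: sample eps explicitly and compare empirical moments.
n = 200_000
eps = rng.normal(1.0, np.sqrt(alpha), size=(n, 5))
samples = (a * eps) @ W.T
mc_mean = samples.mean(axis=0)
mc_var = samples.var(axis=0)
```

The closed-form moments replace the Monte Carlo averaging that sampling-based variational inference would otherwise perform at every forward pass; in a deep network the same moment propagation would be applied layer by layer.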