Bayesian neural networks (BNNs) have demonstrated success in uncertainty estimation and robustness. However, a crucial challenge hinders their use in practice: BNNs require a large number of predictions to produce reliable results, leading to a significant increase in computational cost. To alleviate this issue, we propose spatial smoothing, a method that ensembles neighboring feature-map points of CNNs. By simply adding a few blur layers to the models, we empirically show that spatial smoothing improves the accuracy, uncertainty estimation, and robustness of BNNs across the whole range of ensemble sizes. In particular, BNNs incorporating spatial smoothing achieve high predictive performance with only a handful of ensembles. Moreover, the method can also be applied to canonical deterministic neural networks to improve their performance. Several lines of evidence suggest that these improvements can be attributed to the smoothing and flattening of the loss landscape. In addition, we provide a fundamental explanation for prior works, namely global average pooling, pre-activation, and ReLU6, by showing that they are special cases of spatial smoothing. These not only enhance accuracy but also improve uncertainty estimation and robustness by making the loss landscape smoother, in the same manner as spatial smoothing. The code is available at https://github.com/xxxnell/spatial-smoothing.
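The blur layer described above can be pictured as a simple spatial average over neighboring feature-map points. The following is a minimal illustrative sketch in pure Python, not the paper's implementation (which is provided at the repository URL above); the function name `blur2d` and the choice of a stride-1 box filter are assumptions made for illustration.

```python
def blur2d(fmap, k=2):
    """Illustrative blur layer: replace each feature-map point with the
    average of the points in a k x k neighborhood (stride 1, borders
    averaged over the in-bounds points only). This is the sense in which
    spatial smoothing 'ensembles' neighboring feature-map points."""
    h, w = len(fmap), len(fmap[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Collect the in-bounds neighbors in the k x k window.
            vals = [fmap[i + di][j + dj]
                    for di in range(k) for dj in range(k)
                    if i + di < h and j + dj < w]
            out[i][j] = sum(vals) / len(vals)
    return out

# Each output point averages up to k*k neighboring activations:
print(blur2d([[1.0, 3.0], [5.0, 7.0]], k=2))
```

In a CNN, such a layer would typically be inserted after a block's activations (e.g., as an average-pooling operation with stride 1), so that downstream layers see an ensemble of nearby spatial predictions.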