The last decade has witnessed a growing interest in Bayesian learning. Yet the technical nature of the topic, the multitude of ingredients it involves, and the complexity of turning theory into practical implementations have limited the adoption of the Bayesian learning paradigm across different fields and applications. This self-contained survey introduces readers to the principles and algorithms of Bayesian Learning for Neural Networks, approaching the topic from an accessible, practical-algorithmic perspective. After a general introduction to Bayesian Neural Networks, we discuss and present both standard and recent approaches to Bayesian inference, with an emphasis on solutions relying on Variational Inference and Natural Gradients. We also discuss the use of manifold optimization as a state-of-the-art approach to Bayesian learning. We examine the characteristic properties of all the discussed methods and provide pseudo-code for their implementation, paying attention to practical aspects such as the computation of the gradients.