The last decade has witnessed a growing interest in Bayesian learning. Yet the technicality of the topic, the multitude of ingredients involved, and the complexity of turning theory into practical implementations limit the use of the Bayesian learning paradigm, preventing its widespread adoption across different fields and applications. This self-contained survey introduces readers to the principles and algorithms of Bayesian learning for neural networks, approaching the topic from an accessible, practical-algorithmic perspective. After a general introduction to Bayesian neural networks, we discuss and present both standard and recent approaches to Bayesian inference, with an emphasis on solutions relying on variational inference and the use of natural gradients. We also discuss manifold optimization as a state-of-the-art approach to Bayesian learning. We examine the characteristic properties of all the discussed methods and provide pseudo-code for their implementation, paying attention to practical aspects such as the computation of gradients.