Active inference is a state-of-the-art framework in neuroscience that offers a unified theory of brain function. It has also been proposed as a framework for planning in AI. Unfortunately, the complex mathematics required to create new models can impede the application of active inference in neuroscience and AI research. This paper addresses this problem by providing a complete mathematical treatment of the active inference framework -- in discrete time and state spaces -- and the derivation of the update equations for any new model. We leverage the theoretical connection between active inference and variational message passing as described by John Winn and Christopher M. Bishop in 2005. Since variational message passing is a well-defined methodology for deriving Bayesian belief update equations, this paper opens the door to advanced generative models for active inference. We show that using a fully factorized variational distribution simplifies the expected free energy -- which furnishes priors over policies -- so that agents seek unambiguous states. Finally, we consider future extensions that support deep tree searches for sequential policy optimisation, based upon structure learning and belief propagation.
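To situate the simplification mentioned above, it may help to recall the decomposition of the expected free energy that is standard in the active inference literature; the sketch below uses generic notation (policy \(\pi\), future time step \(\tau\), hidden states \(s_\tau\), outcomes \(o_\tau\)) and is not taken from this paper's own derivation:

\[
G(\pi,\tau) \;=\; \underbrace{D_{\mathrm{KL}}\bigl[\,Q(o_\tau \mid \pi)\,\|\,P(o_\tau)\,\bigr]}_{\text{risk}} \;+\; \underbrace{\mathbb{E}_{Q(s_\tau \mid \pi)}\bigl[\,\mathrm{H}\bigl[P(o_\tau \mid s_\tau)\bigr]\,\bigr]}_{\text{ambiguity}},
\]

where \(Q(o_\tau \mid \pi)\) is the predicted outcome distribution under policy \(\pi\), \(P(o_\tau)\) encodes prior preferences over outcomes, and \(\mathrm{H}\) denotes entropy. Read against this standard form, the result summarised above says that a fully factorized variational distribution simplifies \(G\) so that behaviour is driven by the ambiguity term: agents favour states whose likelihood mapping \(P(o_\tau \mid s_\tau)\) has low entropy, i.e. unambiguous states.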