Bayesian structure learning allows inferring Bayesian network structure from data while reasoning about epistemic uncertainty -- a key element towards enabling active causal discovery and designing interventions in real-world systems. In this work, we propose a general, fully differentiable framework for Bayesian structure learning (DiBS) that operates in the continuous space of a latent probabilistic graph representation. In contrast to existing work, DiBS is agnostic to the form of the local conditional distributions and allows for joint posterior inference of both the graph structure and the conditional distribution parameters. This makes our formulation directly applicable to posterior inference of complex Bayesian network models, e.g., with nonlinear dependencies encoded by neural networks. Using DiBS, we devise an efficient, general-purpose variational inference method for approximating distributions over structural models. In evaluations on simulated and real-world data, our method significantly outperforms related approaches to joint posterior inference.
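To make the idea of a continuous latent probabilistic graph representation concrete, the following is a minimal sketch: each node carries latent embeddings, and the probability of a directed edge is obtained from a sigmoid of an inner product of embeddings. The function name `edge_probs`, the bilinear score, and the temperature `alpha` are illustrative assumptions for this sketch, not the paper's exact parameterization.

```python
import numpy as np

def edge_probs(u, v, alpha=1.0):
    """Edge probabilities from latent node embeddings (illustrative).

    Node i has embeddings u[i] and v[i]; the probability of an edge
    i -> j is sigmoid(alpha * <u[i], v[j]>). Because the latent
    embeddings are continuous, a distribution over graphs can be
    represented -- and updated by gradients -- in a continuous space.
    """
    scores = alpha * (u @ v.T)             # (d, d) pairwise inner products
    probs = 1.0 / (1.0 + np.exp(-scores))  # elementwise sigmoid
    np.fill_diagonal(probs, 0.0)           # exclude self-loops
    return probs

rng = np.random.default_rng(0)
d, k = 5, 3                                # 5 nodes, latent dimension 3
u = rng.normal(size=(d, k))
v = rng.normal(size=(d, k))
probs = edge_probs(u, v)
g = (rng.uniform(size=(d, d)) < probs)     # sample one graph's adjacency matrix
```

Because `probs` is a smooth function of the embeddings, gradient-based variational inference can move particles in the latent space rather than searching the discrete space of graphs directly.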