Bayesian structure learning allows inferring Bayesian network structure from data while reasoning about the epistemic uncertainty -- a key element towards enabling active causal discovery and designing interventions in real-world systems. In this work, we propose a general, fully differentiable framework for Bayesian structure learning (DiBS) that operates in the continuous space of a latent probabilistic graph representation. Contrary to existing work, DiBS is agnostic to the form of the local conditional distributions and allows for joint posterior inference of both the graph structure and the conditional distribution parameters. This makes DiBS directly applicable to posterior inference of nonstandard Bayesian network models, e.g., with nonlinear dependencies encoded by neural networks. Building on recent advances in variational inference, we use DiBS to devise an efficient, general-purpose method for approximating posteriors over structural models. In evaluations on simulated and real-world data, our method significantly outperforms related approaches to joint posterior inference.