Standard Bayesian inference can build models that combine information from various sources, but the resulting inference may not be reliable if some components of the model are misspecified. Cut inference, a particular type of modularized Bayesian inference, is an alternative that splits a model into modules and cuts the feedback from the suspect module. Previous studies have focused on the two-module case, and a more general definition of a ``module'' remains unclear. We present a formal definition of a ``module'' and discuss its properties. We formulate methods for identifying modules, determining their order, and constructing the cut distribution to be used for cut inference within an arbitrary directed acyclic graph structure. We justify the cut distribution by showing that it not only cuts the feedback but is also, among all distributions that do so, the closest approximation to the joint distribution in Kullback-Leibler divergence. We also extend cut inference from the two-module case to the general multiple-module case via a sequential splitting technique and demonstrate it with illustrative applications.
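As a concrete reference point, the two-module setting that this work generalizes can be sketched as follows (the notation $\varphi$, $\theta$, $Z$, $Y$ is the standard one from the cut-inference literature, not necessarily that of this paper): one module has parameter $\varphi$ informed by data $Z$, and the suspect module has parameter $\theta$ informed by data $Y$, which also depends on $\varphi$. The full posterior and the cut distribution are then
\[
p(\varphi, \theta \mid Z, Y) \;\propto\; p(\varphi)\, p(Z \mid \varphi)\, p(\theta)\, p(Y \mid \varphi, \theta),
\qquad
p_{\mathrm{cut}}(\varphi, \theta \mid Z, Y) \;=\; p(\varphi \mid Z)\, p(\theta \mid \varphi, Y),
\]
so that $\varphi$ is informed by $Z$ alone and receives no feedback from the suspect module. Among all distributions whose $\varphi$-marginal equals $p(\varphi \mid Z)$, i.e. all distributions that cut the feedback, $p_{\mathrm{cut}}$ minimizes the Kullback-Leibler divergence to the full posterior; this is the optimality property stated above and extended here to arbitrary directed acyclic graphs.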