Bayesian inference has widely acknowledged advantages in many problems, but it can also be unreliable if the model is misspecified. Bayesian modular inference is concerned with inference in complex models which have been specified through a collection of coupled sub-models. The sub-models are called modules in the literature, and they often arise from modeling different data sources, or from combining domain knowledge from different disciplines. When some modules are misspecified, cutting feedback is a widely used Bayesian modular inference method which ensures that information from suspect model components is not used in making inferences about parameters in correctly specified modules. However, in general settings it is difficult to decide when this "cut posterior" is preferable to the exact posterior. When misspecification is not severe, cutting feedback may increase the uncertainty in Bayesian posterior inference greatly without reducing estimation bias substantially. This motivates semi-modular inference methods, which avoid the binary cut of cutting feedback approaches. In this work, using a local model misspecification framework, we provide the first precise formulation of the bias-variance trade-off that has motivated the literature on semi-modular inference. We then implement a mixture-based semi-modular inference approach, demonstrating theoretically that it delivers inferences that are more accurate, in terms of a user-defined loss function, than if either the cut or full posterior were used by themselves. The new method is demonstrated in a number of applications.
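The mixture idea can be illustrated with a toy sketch (this is an illustration of the general mixture construction, not the paper's specific method or its choice of mixture weight). In a conjugate Gaussian two-module model with one reliable data source and one biased one, the cut posterior uses only the reliable module, the full posterior uses both, and a semi-modular posterior draws from each with probability w and 1 - w. All names, the data-generating setup, and the fixed weight w below are hypothetical.

```python
# Hypothetical two-module setup: phi has an N(0, 1) prior; module 1
# data are N(phi, 1) (well specified), module 2 data are N(phi + bias, 1)
# (misspecified). Both posteriors are available in closed form.
import numpy as np

rng = np.random.default_rng(0)

phi_true, bias = 1.0, 0.8
y1 = rng.normal(phi_true, 1.0, size=20)           # reliable module (small)
y2 = rng.normal(phi_true + bias, 1.0, size=200)   # suspect module (large)

def normal_posterior(ys):
    """Posterior mean and variance for phi: N(0,1) prior, N(phi,1) likelihood."""
    var = 1.0 / (len(ys) + 1.0)
    return ys.sum() * var, var

cut_mean, cut_var = normal_posterior(y1)                          # cut: y1 only
full_mean, full_var = normal_posterior(np.concatenate([y1, y2]))  # full: all data

# Mixture-based semi-modular posterior: sample the cut posterior with
# probability w, the full posterior otherwise. w in (0, 1) interpolates
# between low-bias/high-variance (cut) and high-bias/low-variance (full).
w, n_draws = 0.5, 100_000
use_cut = rng.random(n_draws) < w
draws = np.where(use_cut,
                 rng.normal(cut_mean, np.sqrt(cut_var), n_draws),
                 rng.normal(full_mean, np.sqrt(full_var), n_draws))

print(cut_mean, full_mean, draws.mean())
```

Here the full posterior concentrates near the biased value phi_true + bias because the suspect module dominates, the cut posterior stays near phi_true with larger spread, and the mixture's mean lands between them; in the paper the degree of influence is tuned against a user-defined loss rather than fixed in advance.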