Checking how well a fitted model explains the data is one of the most fundamental parts of a Bayesian data analysis. However, existing model checking methods suffer from trade-offs between being well-calibrated, automated, and computationally efficient. To overcome these limitations, we propose split predictive checks (SPCs), which combine the ease of use and speed of posterior predictive checks with the good calibration properties of predictive checks that rely on model-specific derivations or inference schemes. We develop an asymptotic theory for two types of SPCs: single SPCs and divided SPCs. Our results demonstrate that they offer complementary strengths: single SPCs provide superior power in the small-data regime or when the misspecification is significant, while divided SPCs provide superior power as the dataset size increases or when the form of misspecification is more subtle. We validate the finite-sample utility of SPCs through extensive simulation experiments in exponential family and hierarchical models, and provide four real-data examples where SPCs offer novel insights and additional flexibility beyond what is available when using posterior predictive checks.
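To make the single-SPC recipe implied above concrete (condition the posterior on one split of the data, then compute an ordinary predictive p-value on the disjoint held-out split), the following is a minimal sketch. The conjugate Normal location model, the choice of test statistic, and the function name single_spc_pvalue are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def single_spc_pvalue(y, statistic=np.mean, n_rep=2000, split_frac=0.5, seed=0):
    """Single split predictive check (sketch).

    Fits a conjugate Normal location model (N(0, 1) prior on the mean,
    unit observation noise) on one split of the data, generates posterior
    predictive replicates of the held-out split, and returns a two-sided
    predictive p-value for the chosen test statistic.
    """
    rng = np.random.default_rng(seed)
    y = rng.permutation(np.asarray(y, dtype=float))
    n_fit = int(split_frac * len(y))
    y_fit, y_out = y[:n_fit], y[n_fit:]

    # Conjugate posterior for the mean, conditioned on the fitting split only.
    post_var = 1.0 / (1.0 + n_fit)
    post_mean = post_var * y_fit.sum()

    # Posterior predictive replicates, each the size of the held-out split.
    mu = rng.normal(post_mean, np.sqrt(post_var), size=n_rep)
    y_rep = rng.normal(mu[:, None], 1.0, size=(n_rep, len(y_out)))

    # Compare the statistic on the held-out data to its replicate distribution.
    t_obs = statistic(y_out)
    t_rep = statistic(y_rep, axis=1)
    p_right = np.mean(t_rep >= t_obs)
    return 2.0 * min(p_right, 1.0 - p_right)

# Example: the fitted model assumes unit variance, but the data are
# overdispersed, so a variance statistic should flag the misspecification.
y = np.random.default_rng(1).normal(0.0, 2.0, size=400)
print(single_spc_pvalue(y, statistic=np.var))
```

The point of the split is that the posterior is conditioned only on y_fit, so the held-out p-value avoids the double use of the data that tends to make standard posterior predictive p-values conservative and poorly calibrated.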