Posterior predictive p-values (ppps) have become popular tools for Bayesian model criticism because they are general-purpose and easy to use. However, they can be difficult to interpret because their distribution is not uniform under the hypothesis that the model did generate the data. To address this issue, procedures to obtain calibrated ppps (cppps) have been proposed, but they are rarely used in practice because they require repeated simulation of new data and repeated model estimation via MCMC. Here we provide methods to balance the computational trade-off between the number of calibration replicates and the number of MCMC samples per replicate. Our results suggest that investing in a large number of calibration replicates while using short MCMC chains can save substantial computation time compared to naive implementations, without appreciable loss in accuracy. We propose several estimators of the variance of the cppp that can be used to quickly confirm when the model fits the data well. Variance estimation requires the effective sample sizes of many short MCMC chains; we show that these can be well approximated using the single long MCMC chain from the model fit to the real data. The cppp procedure is implemented in NIMBLE, a flexible framework for hierarchical modeling that supports many models and discrepancy measures.
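To make the calibration idea concrete, the following is a minimal sketch of a generic cppp loop, not the paper's NIMBLE implementation. It assumes calibration replicates are generated from the posterior predictive distribution of the model fitted to the observed data; `run_mcmc`, `simulate_data`, and `discrepancy` are hypothetical user-supplied callables standing in for the model-specific fitting, simulation, and discrepancy-measure code.

```python
# Illustrative sketch of a cppp calibration loop (assumptions noted above);
# all model-specific functions are placeholders, not a real library API.
import numpy as np

def ppp(y, posterior_draws, simulate_data, discrepancy, rng):
    """Posterior predictive p-value: fraction of posterior draws for which the
    discrepancy of replicated data is at least that of the data at hand."""
    exceed = 0
    for theta in posterior_draws:
        y_rep = simulate_data(theta, rng)
        exceed += discrepancy(y_rep, theta) >= discrepancy(y, theta)
    return exceed / len(posterior_draws)

def calibrated_ppp(y_obs, run_mcmc, simulate_data, discrepancy,
                   n_cal=1000, n_mcmc_obs=10_000, n_mcmc_cal=500, seed=0):
    """Calibrated ppp: refer the observed ppp to the distribution of ppps
    computed on calibration replicates simulated from the fitted model.
    Each replicate is refit with a short MCMC run (n_mcmc_cal << n_mcmc_obs)."""
    rng = np.random.default_rng(seed)
    post_obs = run_mcmc(y_obs, n_mcmc_obs, rng)          # one long chain on the real data
    p_obs = ppp(y_obs, post_obs, simulate_data, discrepancy, rng)
    p_cal = np.empty(n_cal)
    for r in range(n_cal):
        theta_r = post_obs[rng.integers(len(post_obs))]  # parameters drawn from the posterior
        y_r = simulate_data(theta_r, rng)                # one calibration dataset
        post_r = run_mcmc(y_r, n_mcmc_cal, rng)          # short chain for this replicate
        p_cal[r] = ppp(y_r, post_r, simulate_data, discrepancy, rng)
    return np.mean(p_cal <= p_obs)                       # cppp estimate
```

In this sketch, the computational trade-off discussed in the abstract is the choice of `n_cal` versus `n_mcmc_cal`: many calibration replicates, each fitted with a short chain, rather than few replicates with long chains.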