Calibrated Model Criticism Using Split Predictive Checks


arXiv: 2203.15897 · MaRDI QID: Q6395086

Author name not available

Publication date: 29 March 2022

Abstract: Checking how well a fitted model explains the data is one of the most fundamental parts of a Bayesian data analysis. However, existing model checking methods suffer from trade-offs between being well-calibrated, automated, and computationally efficient. To overcome these limitations, we propose split predictive checks (SPCs), which combine the ease of use and speed of posterior predictive checks with the good calibration properties of predictive checks that rely on model-specific derivations or inference schemes. We develop an asymptotic theory for two types of SPCs: single SPCs and divided SPCs. Our results demonstrate that they offer complementary strengths: single SPCs provide superior power in the small-data regime or when the misspecification is significant, while divided SPCs provide superior power as the dataset size increases or when the form of misspecification is more subtle. We validate the finite-sample utility of SPCs through extensive simulation experiments in exponential family and hierarchical models, and provide four real-data examples where SPCs offer novel insights and additional flexibility beyond what is available when using posterior predictive checks.
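As a concrete illustration of the single SPC described in the abstract: split the data, fit the posterior on one part, and compute a posterior predictive p-value for a test statistic on the held-out part. The sketch below assumes a conjugate normal-location model with known observation variance; the name single_spc_pvalue, the 50/50 split, and the variance test statistic are illustrative choices, not details taken from the paper or its companion repository.

import numpy as np

def single_spc_pvalue(data, test_stat=np.mean, train_frac=0.5,
                      n_rep=2000, prior_mean=0.0, prior_var=10.0,
                      obs_var=1.0, rng=None):
    """Single SPC p-value sketch: fit on one split, check the held-out split."""
    rng = np.random.default_rng(rng)
    data = rng.permutation(data)
    n_train = int(train_frac * len(data))
    train, holdout = data[:n_train], data[n_train:]

    # Conjugate posterior for the mean under a N(prior_mean, prior_var) prior
    # with known observation variance obs_var.
    post_var = 1.0 / (1.0 / prior_var + len(train) / obs_var)
    post_mean = post_var * (prior_mean / prior_var + train.sum() / obs_var)

    # Replicated held-out datasets drawn from the posterior predictive
    # conditioned only on the training split.
    mus = rng.normal(post_mean, np.sqrt(post_var), size=n_rep)
    reps = rng.normal(mus[:, None], np.sqrt(obs_var),
                      size=(n_rep, len(holdout)))

    # One-sided posterior predictive p-value of the held-out test statistic.
    t_obs = test_stat(holdout)
    t_rep = np.apply_along_axis(test_stat, 1, reps)
    return np.mean(t_rep >= t_obs)

# Example: the fitted normal model assumes unit variance, so heavy-tailed
# data should yield a small p-value for the variance statistic.
rng = np.random.default_rng(0)
y = rng.standard_t(df=2, size=200)
print(single_spc_pvalue(y, test_stat=np.var, rng=1))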




Has companion code repository: https://github.com/tarps-group/split-predictive-checks







