Controlling the error probabilities of model selection information criteria using bootstrapping
From MaRDI portal
Publication: 5861435
DOI: 10.1080/02664763.2019.1701636
OpenAlex: W2994876975
Wikidata: Q126544566
Scholia: Q126544566
MaRDI QID: Q5861435
Michael Cullan, Scott Lidgard, Beckett Sterner
Publication date: 1 March 2022
Published in: Journal of Applied Statistics
Full work available at URL: https://doi.org/10.1080/02664763.2019.1701636
Cites Work
- Likelihood Ratio Tests for Model Selection and Non-Nested Hypotheses
- The Model Confidence Set
- Akaike-type criteria and the reliability of inference: model selection versus statistical model specification
- To explain or to predict?
- Bootstrap methods: another look at the jackknife
- Assessing model mimicry using the parametric bootstrap
- Weak convergence of smoothed and nonsmoothed bootstrap quantile estimates
- Performance Measures for Neyman–Pearson Classification
- The bootstrap: To smooth or not to smooth?
- Model Selection and Multimodel Inference
- IX. On the problem of the most efficient tests of statistical hypotheses
- Estimation and Accuracy After Model Selection
- Bridging AIC and BIC: A New Criterion for Autoregression
- Models and Statistical Inference: The Controversy between Fisher and Neyman–Pearson
- Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction
- Best-subset model selection based on multitudinal assessments of likelihood improvements