Null Hypothesis Significance Testing Defended and Calibrated by Bayesian Model Checking
From MaRDI portal
Publication: 5056972
DOI: 10.1080/00031305.2019.1699443
OpenAlex: W2994486096
Wikidata: Q126627198
Scholia: Q126627198
MaRDI QID: Q5056972
Publication date: 14 December 2022
Published in: The American Statistician
Full work available at URL: https://doi.org/10.1080/00031305.2019.1699443
Keywords: hypothesis testing; model checking; objective Bayes factor; relative belief ratio; \(p\)-value calibration; reproducibility crisis
Related Items (6)
- Publication Policies for Replicable Research and the Community-Wide False Discovery Rate
- Null Hypothesis Significance Testing Interpreted and Calibrated by Estimating Probabilities of Sign Errors: A Bayes-Frequentist Continuum
- Maximum entropy derived and generalized under idempotent probability to address Bayes-frequentist uncertainty and model revision uncertainty: an information-theoretic semantics for possibility theory
- Statistical evidence and surprise unified under possibility theory
- The \(p\)-value interpreted as the posterior probability of explaining the data: applications to multiple testing and to restricted parameter spaces
- Fiducialize statistical significance: transforming \(p\)-values into conservative posterior probabilities and Bayes factors
Cites Work
- Error probabilities in default Bayesian hypothesis testing
- Almost sure hypothesis testing and a resolution of the Jeffreys-Lindley paradox
- The one-sided posterior predictive \(p\)-value for Fieller's problem
- Inference after checking multiple Bayesian models for data conflict and applications to mitigating the influence of rejected priors
- Decision making under uncertainty using imprecise probabilities
- Revised standards for statistical evidence
- A general framework for model-based statistics
- A Statistical Paradox
- A comment on D. V. Lindley's statistical paradox
- Reporting Bayes factors or probabilities to decision makers of unknown loss functions
- Confidence distributions and empirical Bayes posterior distributions unified as distributions of evidential support
- Correcting false discovery rates for their bias toward false positives
- Calibration of \(p\) Values for Testing Precise Null Hypotheses
- A critical evaluation of the current “p‐value controversy”
- Post-Processing Posterior Predictive \(p\) Values
- The False Positive Risk: A Proposal Concerning What to Do About p-Values
- The ASA Statement on p-Values: Context, Process, and Purpose