Statistical tests for comparing possibly misspecified and nonnested models
DOI: 10.1006/jmps.1999.1281
zbMath: 1048.62503
OpenAlex: W2026875721
Wikidata: Q52080621 (Scholia: Q52080621)
MaRDI QID: Q1977909
Publication date: 2000
Published in: Journal of Mathematical Psychology
Full work available at URL: https://semanticscholar.org/paper/ca0420b7028990c07bbe13ccaa5811768525cafc
Related Items
- A tutorial on Fisher information
- Key concepts in model selection: Performance and generalizability
- Discrepancy risk model selection test theory for comparing possibly misspecified or nonnested models
- A comparison of models for learning how to dynamically integrate multiple cues in order to forecast continuous criteria
- Bayes factors: Prior sensitivity and model generalizability
Cites Work
- Likelihood Ratio Tests for Model Selection and Non-Nested Hypotheses
- Model selection and Akaike's information criterion (AIC): The general theory and its analytical extensions
- Estimating the dimension of a model
- Assessing the error probability of the model selection test
- Making correct statistical inferences using a wrong probability model
- Information criteria for selecting possibly misspecified parametric models
- How to assess a model's testability and identifiability
- An introduction to model selection
- Akaike's information criterion and recent developments in information complexity
- The importance of complexity in model selection
- Key concepts in model selection: Performance and generalizability
- Discrepancy risk model selection test theory for comparing possibly misspecified or nonnested models
- Comparing Non-Nested Linear Models
- A Reference Bayesian Test for Nested Hypotheses and its Relationship to the Schwarz Criterion
- The Large-Sample Distribution of the Likelihood Ratio for Testing Composite Hypotheses
- Maximum Likelihood Estimation of Misspecified Models