Data-driven priors and their posterior concentration rates
DOI: 10.1214/19-EJS1600
zbMath: 1429.62148
arXiv: 1604.05734
MaRDI QID: Q2326047
Ryan Martin, Stephen G. Walker
Publication date: 4 October 2019
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1604.05734
Mathematics Subject Classification:
- Nonparametric regression and quantile regression (62G08)
- Density estimation (62G07)
- Asymptotic distribution theory in statistics (62E20)
- Empirical decision procedures; empirical Bayes procedures (62C12)
Related Items
- Bayesian inference in high-dimensional linear models using an empirical correlation-adaptive prior
- A comparison of learning rate selection methods in generalized Bayesian inference
- Generalized fiducial factor: an alternative to the Bayes factor for forensic identification of source problems
- Empirical priors and coverage of posterior credible sets in a sparse normal mean model
- GAN-Based Priors for Quantifying Uncertainty in Supervised Learning
Cites Work
- Empirical Bayes posterior concentration in sparse high-dimensional linear models
- Concentration rate and consistency of the posterior distribution for selected priors under monotonicity constraints
- Asymptotically minimax empirical Bayes estimation of a sparse normal mean vector
- Adaptive Bayesian estimation using a Gaussian random field with inverse gamma bandwidth
- Bayesian adaptation
- Posterior convergence rates of Dirichlet mixtures at smooth densities
- On rates of convergence for posterior distributions in infinite-dimensional models
- Convergence rates of posterior distributions for non-i.i.d. observations
- Statistical decision theory and Bayesian analysis, 2nd ed.
- The horseshoe+ estimator of ultra-sparse signals
- Uncertainty quantification for the horseshoe (with discussion)
- Convergence rates of posterior distributions
- Rates of convergence of posterior distributions
- Entropies and rates of convergence for maximum likelihood and Bayes estimation for mixtures of normal densities
- Needles and straw in a haystack: posterior concentration for possibly sparse sequences
- Empirical Bayes scaling of Gaussian priors in the white noise model
- Adaptive Bayesian density estimation with location-scale mixtures
- Empirical Bayes oracle uncertainty quantification for regression
- Asymptotic behaviour of the empirical Bayes posteriors associated to maximum marginal likelihood estimator
- On coverage and local radial rates of credible sets
- Empirical priors and coverage of posterior credible sets in a sparse normal mean model
- Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
- Posterior concentration rates for empirical Bayes procedures with applications to Dirichlet process mixtures
- Adaptive posterior contraction rates for the horseshoe
- Estimation and variable selection with exponential weights
- On Bayesian Consistency
- Bayesian Optimal Adaptive Estimation Using a Sieve Prior
- Bayes and empirical Bayes: do they merge?
- Rényi Divergence and Kullback-Leibler Divergence
- Adaptive Bayesian Procedures Using Random Series Priors
- On Rates of Convergence for Bayesian Density Estimation
- The horseshoe estimator for sparse signals
- Generalized double Pareto shrinkage
- Local Posterior Concentration Rate for Multilevel Sparse Sequences
- Dirichlet–Laplace Priors for Optimal Shrinkage
- Fundamentals of Nonparametric Bayesian Inference
- Rate exact Bayesian adaptation with modified block priors