Empirical priors and posterior concentration in a piecewise polynomial sequence model
From MaRDI portal
Publication: 6671915
DOI: 10.5705/ss.202021.0335
MaRDI QID: Q6671915
Weining Shen, Ryan M. Martin, Chang Liu
Publication date: 27 January 2025
Published in: Statistica Sinica
Keywords: Bayesian estimation; structure learning; change-point detection; high-dimensional inference; trend filtering
Cites Work
- Empirical Bayes posterior concentration in sparse high-dimensional linear models
- Asymptotically minimax empirical Bayes estimation of a sparse normal mean vector
- Wild binary segmentation for multiple change-point detection
- On the prediction performance of the Lasso
- Statistics for high-dimensional data. Methods, theory and applications.
- Properties and refinements of the fused Lasso
- Bayesian linear regression with sparse priors
- Minimax risk over \(l_p\)-balls for \(l_q\)-error
- Locally adaptive regression splines
- On stepwise pattern recovery of the fused Lasso
- Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it
- Nonparametric shape-restricted regression
- Bayesian fractional posteriors
- Needles and straw in haystacks: Empirical Bayes estimates of possibly sparse sequences
- Needles and straw in a haystack: posterior concentration for possibly sparse sequences
- Approximate \(\ell_0\)-penalized estimation of piecewise-constant signals on graphs
- Coverage of credible intervals in nonparametric monotone regression
- Adaptive estimation of multivariate piecewise polynomials and bounded variation functions by optimal decision trees
- Adaptive risk bounds in univariate total variation denoising and trend filtering
- On estimation of isotonic piecewise constant signals
- Empirical priors and coverage of posterior credible sets in a sparse normal mean model
- A general framework for Bayes structured linear models
- Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
- Data-driven priors and their posterior concentration rates
- General maximum likelihood empirical Bayes estimation of normal means
- Adaptive posterior contraction rates for the horseshoe
- Adaptive piecewise polynomial estimation via trend filtering
- Penalized regression, standard errors, and Bayesian Lassos
- On Bayesian Consistency
- $\ell_1$ Trend Filtering
- Sparsity and Smoothness Via the Fused Lasso
- Calibrating general posterior credible regions
- Posterior contraction in sparse generalized linear models
- Bayesian inference in high-dimensional linear models using an empirical correlation-adaptive prior
- Robust Bayesian Inference via Coarsening
- Exact Bayesian regression of piecewise constant functions
- High-dimensional properties for empirical priors in linear regression with unknown error variance
- The Elements of Statistical Learning