Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
DOI: 10.1214/18-AOS1783 · zbMATH: 1435.62037 · arXiv: 1811.06198 · MaRDI QID: Q2284379
Kyoungjae Lee, Lizhen Lin, Jaeyong Lee
Publication date: 15 January 2020
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1811.06198
Keywords: covariance matrix; precision matrix; DAG model; Cholesky factor; posterior convergence rate; strong model selection consistency
MSC classes: Asymptotic properties of parametric estimators (62F12); Estimation in multivariate analysis (62H12); Bayesian inference (62F15); Minimax procedures in statistical decision theory (62C20)
Related Items (15)
Uses Software
Cites Work
- Penalized likelihood methods for estimation of sparse high-dimensional directed acyclic graphs
- Bayesian variable selection with shrinking and diffusing priors
- Empirical Bayes posterior concentration in sparse high-dimensional linear models
- Asymptotic normality and optimalities in estimation of large Gaussian graphical models
- Estimating sparse precision matrix: optimal rates of convergence and adaptive estimation
- High dimensional covariance matrix estimation using a factor model
- \(\ell_{0}\)-penalized maximum likelihood for sparse directed acyclic graphs
- Posterior convergence rates for estimating large precision matrices using graphical models
- Asymptotically minimax empirical Bayes estimation of a sparse normal mean vector
- On the computational complexity of high-dimensional Bayesian variable selection
- Statistics for high-dimensional data. Methods, theory and applications.
- Optimal rates of convergence for sparse covariance matrix estimation
- Bayesian linear regression with sparse priors
- High dimensional posterior convergence rates for decomposable graphical models
- Optimal rates of convergence for covariance matrix estimation
- Optimal Bayesian minimax rates for unconstrained large covariance matrices
- Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it
- Bayesian fractional posteriors
- Posterior graph selection and estimation consistency for high-dimensional Bayesian DAG models
- Adaptive covariance matrix estimation through block thresholding
- High dimensional sparse covariance estimation via directed acyclic graphs
- A scalable sparse Cholesky based approach for learning high-dimensional covariance matrices in ordered data
- Bayesian structure learning in graphical models
- Consistency of Bayesian linear model selection with a growing number of parameters
- Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
- Optimal estimation and rank detection for sparse spiked covariance matrices
- Rate-optimal posterior contraction for sparse PCA
- Regularized estimation of large covariance matrices
- Posterior contraction in sparse Bayesian factor models for massive covariance matrices
- On Bayesian Consistency
- A new approach to Cholesky-based covariance regularization in high dimensions
- A study of error variance estimation in Lasso regression
- Mixtures of g Priors for Bayesian Variable Selection
- Scalable Bayesian Variable Selection Using Nonlocal Prior Densities in Ultrahigh-dimensional Settings
- Cholesky decomposition of a hyper inverse Wishart matrix
- Calibrating general posterior credible regions
- Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Estimating Large Precision Matrices via Modified Cholesky Decomposition
- Robust Bayesian Inference via Coarsening
- On Consistency and Sparsity for Principal Components Analysis in High Dimensions
- Learning Local Dependence In Ordered Data
- Covariance matrix selection and estimation via penalised normal likelihood