Estimating Large Precision Matrices via Modified Cholesky Decomposition
From MaRDI portal
Publication:4986367
zbMath: 1464.62294
arXiv: 1707.01143
MaRDI QID: Q4986367
Publication date: 27 April 2021
Full work available at URL: https://arxiv.org/abs/1707.01143
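The record carries no abstract, so as orientation only: the title refers to the modified Cholesky decomposition of a precision matrix, Ω = TᵀD⁻¹T, where T is unit lower-triangular (rows holding negated regression coefficients of each variable on its predecessors) and D is the diagonal of residual variances. A minimal NumPy sketch of this construction, assuming the variables come in a fixed order (the function name and test data are illustrative, not from the paper):

```python
import numpy as np

def modified_cholesky_precision(X):
    """Estimate a precision matrix via the modified Cholesky decomposition.

    Regress each column of X on its predecessors: the coefficients fill a
    unit lower-triangular T (with a sign flip) and the residual variances a
    diagonal D, giving Omega_hat = T' D^{-1} T.
    """
    n, p = X.shape
    T = np.eye(p)
    d = np.empty(p)
    d[0] = X[:, 0].var()
    for j in range(1, p):
        # Least-squares regression of column j on columns 0..j-1.
        coef, *_ = np.linalg.lstsq(X[:, :j], X[:, j], rcond=None)
        T[j, :j] = -coef
        d[j] = (X[:, j] - X[:, :j] @ coef).var()
    return T.T @ np.diag(1.0 / d) @ T

# Illustrative data: draws from a banded (AR-like) covariance.
rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.5, 0.0, 0.0],
                  [0.5, 1.0, 0.5, 0.0],
                  [0.0, 0.5, 1.0, 0.5],
                  [0.0, 0.0, 0.5, 1.0]])
X = rng.standard_normal((500, 4)) @ np.linalg.cholesky(Sigma).T
Omega_hat = modified_cholesky_precision(X)
```

By construction the estimate is symmetric and positive definite whenever every residual variance is positive; sparsity in the Cholesky factor T is what the paper's high-dimensional theory exploits.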
Related Items (6)
- Bayesian joint inference for multiple directed acyclic graphs
- Contraction of a quasi-Bayesian model with shrinkage priors in precision matrix estimation
- Scalable Bayesian high-dimensional local dependence learning
- Precision matrix estimation under the horseshoe-like prior-penalty dual
- Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors
- Bayesian inference for high-dimensional decomposable graphs
Cites Work
- Penalized likelihood methods for estimation of sparse high-dimensional directed acyclic graphs
- Estimating sparse precision matrix: optimal rates of convergence and adaptive estimation
- High dimensional covariance matrix estimation using a factor model
- Bernstein-von Mises theorems for functionals of the covariance matrix
- Minimax optimal estimation of general bandable covariance matrices
- Posterior convergence rates for estimating large precision matrices using graphical models
- Optimal rates of convergence for sparse covariance matrix estimation
- Estimation of functionals of sparse covariance matrices
- High dimensional posterior convergence rates for decomposable graphical models
- Optimal rates of convergence for covariance matrix estimation
- Posterior convergence rates of Dirichlet mixtures at smooth densities
- Covariance regularization by thresholding
- Optimal Bayesian minimax rates for unconstrained large covariance matrices
- Posterior graph selection and estimation consistency for high-dimensional Bayesian DAG models
- Convergence rates of posterior distributions.
- High dimensional sparse covariance estimation via directed acyclic graphs
- Adaptive estimation of covariance matrices via Cholesky decomposition
- A scalable sparse Cholesky based approach for learning high-dimensional covariance matrices in ordered data
- Bayesian structure learning in graphical models
- Optimal estimation and rank detection for sparse spiked covariance matrices
- Rate-optimal posterior contraction for sparse PCA
- Law of log determinant of sample covariance matrix and optimal estimation of differential entropy for high-dimensional Gaussian distributions
- Regularized estimation of large covariance matrices
- Posterior contraction in sparse Bayesian factor models for massive covariance matrices
- On Consistency and Sparsity for Principal Components Analysis in High Dimensions
- Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation