Precision matrix estimation under the horseshoe-like prior-penalty dual
From MaRDI portal
Publication:6200870
DOI: 10.1214/23-ejs2196
arXiv: 2104.10750
OpenAlex: W3153760897
Wikidata: Q129454896
Scholia: Q129454896
MaRDI QID: Q6200870
Ksheera Sagar, Sayantan Banerjee, Jyotishka Datta, Anindya Bhadra
Publication date: 25 March 2024
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/2104.10750
MSC classifications: Asymptotic properties of parametric estimators (62F12); Estimation in multivariate analysis (62H12); Bayesian inference (62F15)
Cites Work
- Sparsity information and regularization in the horseshoe and other shrinkage priors
- Sparse inverse covariance estimation with the graphical lasso
- Bayesian structure learning in sparse Gaussian graphical models
- Scaling it up: stochastic search structure learning in graphical models
- Optimal shrinkage estimation of mean parameters in family of distributions with quadratic variance
- Covariance estimation: the GLM and regularization perspectives
- Posterior convergence rates for estimating large precision matrices using graphical models
- Bayesian linear regression with sparse priors
- Coordinate descent algorithm for covariance graphical Lasso
- High dimensional posterior convergence rates for decomposable graphical models
- One-step sparse estimates in nonconcave penalized likelihood models
- Sparsistency and rates of convergence in large covariance matrix estimation
- Ridge estimation of inverse covariance matrices from high-dimensional data
- Uncertainty quantification for the horseshoe (with discussion)
- Convergence rates of posterior distributions.
- Sparse permutation invariant covariance estimation
- Bayesian structure learning in graphical models
- The beta-mixture shrinkage prior for sparse covariances with near-minimax posterior convergence rate
- Network exploration via the adaptive LASSO and SCAD penalties
- Lasso meets horseshoe: a survey
- Regularized estimation of large covariance matrices
- Bayesian graphical Lasso models and efficient posterior computation
- Shrinkage estimation of large dimensional precision matrix using random matrix theory
- A Constrained ℓ1 Minimization Approach to Sparse Precision Matrix Estimation
- Maximum likelihood estimation via the ECM algorithm: A general framework
- The horseshoe estimator for sparse signals
- The Bayesian Lasso
- Normal Variance-Mean Mixtures and z Distributions
- Hyper Inverse Wishart Distribution for Non-decomposable Graphs and its Application to Bayesian Inference for Gaussian Graphical Models
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Regions in the Complex Plane Containing the Eigenvalues of a Matrix
- Cholesky decomposition of a hyper inverse Wishart matrix
- Estimating Large Precision Matrices via Modified Cholesky Decomposition
- The G-Wishart Weighted Proposal Algorithm: Efficient Posterior Computation for Gaussian Graphical Models
- An overview of the estimation of large covariance and precision matrices
- Bayesian Regularization for Graphical Models With Unequal Shrinkage
- The Power of Convex Relaxation: Near-Optimal Matrix Completion
- Regularization and Variable Selection Via the Elastic Net
- Dirichlet–Laplace Priors for Optimal Shrinkage
- Default Bayesian analysis with global-local shrinkage priors
- Sparse precision matrix estimation via lasso penalized D-trace loss
- The Graphical Horseshoe Estimator for Inverse Covariance Matrices