Contraction of a quasi-Bayesian model with shrinkage priors in precision matrix estimation
From MaRDI portal
Publication: 2156815
DOI: 10.1016/j.jspi.2022.03.003
OpenAlex: W3159784232
Wikidata: Q114154284 (Scholia: Q114154284)
MaRDI QID: Q2156815
Malay Ghosh, Yisha Yao, Ruo-Yang Zhang
Publication date: 20 July 2022
Published in: Journal of Statistical Planning and Inference
Full work available at URL: https://arxiv.org/abs/2104.12060
Cites Work
- Sparse inverse covariance estimation with the graphical lasso
- Posterior convergence rates for estimating large precision matrices using graphical models
- The horseshoe estimator: posterior concentration around nearly black vectors
- TIGER: A tuning-insensitive approach for optimally estimating Gaussian graphical models
- Bayesian linear regression with sparse priors
- High dimensional posterior convergence rates for decomposable graphical models
- Sparsistency and rates of convergence in large covariance matrix estimation
- Computing a nearest symmetric positive semidefinite matrix
- Asymptotic normality of posterior distributions in high-dimensional linear models
- Heuristics of instability and stabilization in model selection
- Normal approximation to the posterior distribution for generalized linear models with many covariates
- Asymptotic normality of posterior distributions for exponential families when the number of parameters tends to infinity
- On the contraction properties of some high-dimensional quasi-posterior distributions
- The horseshoe+ estimator of ultra-sparse signals
- Bayesian estimation of sparse signals with a continuous spike-and-slab prior
- Convergence rates of posterior distributions
- Needles and straw in a haystack: posterior concentration for possibly sparse sequences
- Sparse permutation invariant covariance estimation
- High-dimensional covariance estimation by minimizing \(\ell _{1}\)-penalized log-determinant divergence
- Empirical Bayes oracle uncertainty quantification for regression
- Bayesian structure learning in graphical models
- Ultra high-dimensional multivariate posterior contraction rate under shrinkage priors
- Bayesian shrinkage towards sharp minimaxity
- Network exploration via the adaptive LASSO and SCAD penalties
- Quasi-Bayesian estimation of large Gaussian graphical models
- Contraction properties of shrinkage priors in logistic regression
- GPU-accelerated Gibbs sampling: a case study of the horseshoe probit model
- Minimax-optimal nonparametric regression in high dimensions
- Bayesian variable selection for high dimensional generalized linear models: convergence rates of the fitted densities
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\) (with discussions and rejoinder)
- High-dimensional graphs and variable selection with the Lasso
- Bayesian graphical Lasso models and efficient posterior computation
- On the half-Cauchy prior for a global scale parameter
- Inference with normal-gamma prior distributions in regression problems
- Joint High‐Dimensional Bayesian Variable and Covariance Selection with an Application to eQTL Analysis
- Posterior consistency in linear models under shrinkage priors
- Sparse Matrix Inversion with Scaled Lasso
- Calibrated Precision Matrix Estimation for High-Dimensional Elliptical Distributions
- Scaled sparse linear regression
- Objective Bayesian model selection in Gaussian graphical models
- Model selection and estimation in the Gaussian graphical model
- The horseshoe estimator for sparse signals
- The Bayesian Lasso
- Bayesian lasso regression
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Generalized double Pareto shrinkage
- Estimating Large Precision Matrices via Modified Cholesky Decomposition
- An overview of the estimation of large covariance and precision matrices
- Dirichlet–Laplace Priors for Optimal Shrinkage
- A Convex Pseudolikelihood Framework for High Dimensional Partial Correlation Estimation with Convergence Guarantees
- Sparse precision matrix estimation via lasso penalized D-trace loss
- Covariance matrix selection and estimation via penalised normal likelihood
- Adaptive Bayesian density regression for high-dimensional data
- The Graphical Horseshoe Estimator for Inverse Covariance Matrices