Neuronized Priors for Bayesian Sparse Linear Regression
From MaRDI portal
DOI: 10.1080/01621459.2021.1876710
zbMath: 1515.62070
arXiv: 1810.00141
OpenAlex: W3123637929
MaRDI QID: Q6110693
No author found.
Publication date: 6 July 2023
Published in: Journal of the American Statistical Association
Full work available at URL: https://arxiv.org/abs/1810.00141
Classification (MSC):
- 62J07 Ridge regression; shrinkage estimators (Lasso)
- 62J05 Linear regression; mixed models
- 62C10 Bayesian problems; characterization of Bayes procedures
Related Items (3)
- Horseshoe Regularisation for Machine Learning in Complex and Deep Models
- On posterior consistency of Bayesian factor models in high dimensions
- Bayesian Bootstrap Spike-and-Slab LASSO
Cites Work
- Optimization by Simulated Annealing
- Reversible jump Markov chain Monte Carlo computation and Bayesian model determination
- Bayesian variable selection with shrinking and diffusing priors
- Bayesian variable selection regression for genome-wide association studies and other large-scale problems
- Conditions for posterior contraction in the sparse normal means problem
- On Bayesian model and variable selection using MCMC
- The horseshoe estimator: posterior concentration around nearly black vectors
- Statistics for high-dimensional data. Methods, theory and applications.
- Kernel estimators of asymptotic variance for adaptive Markov chain Monte Carlo
- Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem
- A majorization-minimization approach to variable selection using spike and slab priors
- Bayesian linear regression with sparse priors
- General state space Markov chains and MCMC algorithms
- Estimating the dimension of a model
- Geometric ergodicity of Metropolis algorithms
- Lasso, fractional norm and structured sparse estimation using a Hadamard product parametrization
- Asymptotic optimality of one-group shrinkage priors in sparse high-dimensional problems
- Bayesian estimation of sparse signals with a continuous spike-and-slab prior
- Choice of hierarchical priors: Admissibility in estimation of normal means
- Rates of convergence of the Hastings and Metropolis algorithms
- Needles and straw in a haystack: posterior concentration for possibly sparse sequences
- Variance prior forms for high-dimensional Bayesian variable selection
- Nearly optimal Bayesian shrinkage for high-dimensional regression
- Nonparametric Independence Screening in Sparse Ultra-High-Dimensional Additive Models
- Extended Bayesian information criteria for model selection with large model spaces
- Fixed-Width Output Analysis for Markov Chain Monte Carlo
- The horseshoe estimator for sparse signals
- The Bayesian Lasso
- Bayesian Variable Selection in Linear Regression
- Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms
- Convergence of Slice Sampler Markov Chains
- The Collapsed Gibbs Sampler in Bayesian Computations with Applications to a Gene Regulation Problem
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Generalised Gibbs sampler and multigrid Monte Carlo for Bayesian computation
- Parameter Expansion for Data Augmentation
- The Multiple-Try Method and Local Optimization in Metropolis Sampling
- On the use of Non-Local Prior Densities in Bayesian Hypothesis Tests
- Scalable Bayesian Variable Selection Using Nonlocal Prior Densities in Ultrahigh-dimensional Settings
- The Spike-and-Slab LASSO
- Generalized double Pareto shrinkage
- Bayesian Model Selection in High-Dimensional Settings
- EMVS: The EM Approach to Bayesian Variable Selection
- Skinny Gibbs: A Consistent and Scalable Gibbs Sampler for Model Selection
- Shotgun Stochastic Search for “Large p” Regression
- Posterior model probabilities via path-based pairwise priors
- Learning representations by back-propagating errors
- Decoupling Shrinkage and Selection in Bayesian Linear Models: A Posterior Summary Perspective
- Dirichlet–Laplace Priors for Optimal Shrinkage
- Bayesian Inference for Logistic Models Using Pólya–Gamma Latent Variables
- Smoothly Clipped Absolute Deviation on High Dimensions
- On the Geometric Ergodicity of Two-Variable Gibbs Samplers
- Exploration, normalization, and summaries of high density oligonucleotide array probe level data
- Component-wise Markov chain Monte Carlo: uniform and geometric ergodicity under mixing and composition