Fully Bayesian logistic regression with hyper-LASSO priors for high-dimensional feature selection
From MaRDI portal
Publication: 4960726
DOI: 10.1080/00949655.2018.1490418
OpenAlex: W1592324551
MaRDI QID: Q4960726
Publication date: 23 April 2020
Published in: Journal of Statistical Computation and Simulation
Full work available at URL: https://arxiv.org/abs/1405.3319
Keywords: horseshoe, Gibbs sampling, MCMC, feature selection, high-dimensional, Hamiltonian Monte Carlo, fully Bayesian, heavy-tailed prior, non-convex penalties, hyper-Lasso priors
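The keywords above describe fully Bayesian logistic regression under heavy-tailed (hyper-LASSO-type) shrinkage priors, fitted by MCMC. As a minimal illustrative sketch only (not the paper's algorithm): the toy code below does MAP estimation for logistic regression under a Cauchy (Student-t with 1 degree of freedom) coefficient prior, one example of the heavy-tailed prior family the keywords refer to. All data sizes, the prior scale, and the optimizer settings are assumptions chosen for the demo; the paper itself performs fully Bayesian posterior sampling with Gibbs sampling and Hamiltonian Monte Carlo rather than MAP optimization.

```python
import numpy as np

# Illustrative sketch, NOT the paper's method: heavy-tailed shrinkage for
# logistic regression via MAP estimation with a Cauchy prior on coefficients.
rng = np.random.default_rng(0)

n, p = 400, 10
beta_true = np.zeros(p)
beta_true[0], beta_true[1] = 2.0, -2.0   # two true signals, eight noise features

X = rng.standard_normal((n, p))
prob = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = rng.binomial(1, prob)

scale = 1.0          # Cauchy prior scale (assumed value for the demo)
lr, n_iter = 0.005, 5000

beta = np.zeros(p)
for _ in range(n_iter):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    grad_loglik = X.T @ (y - mu)                        # logistic log-likelihood gradient
    grad_logprior = -2.0 * beta / (scale**2 + beta**2)  # gradient of log Cauchy density
    beta += lr * (grad_loglik + grad_logprior)          # gradient ascent on log posterior

beta_hat = beta
# The heavy-tailed prior shrinks the eight noise coefficients toward zero
# while leaving the two large signal coefficients nearly unshrunk.
```

The key property illustrated here is the one the heavy-tailed-prior literature emphasizes: unlike a Gaussian prior, a Cauchy-type prior applies strong shrinkage to small coefficients but vanishing shrinkage to large ones, since its log-density gradient `-2b/(s^2 + b^2)` decays as `|b|` grows.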
Related Items
- Sparse regularized fuzzy regression
- A comparative study on high-dimensional Bayesian regression with binary predictors
- Tree ensembles with rule structured horseshoe regularization
Uses Software
Cites Work
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Sparsity information and regularization in the horseshoe and other shrinkage priors
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- The horseshoe estimator: posterior concentration around nearly black vectors
- A weakly informative default prior distribution for logistic and other regression models
- Tree ensembles with rule structured horseshoe regularization
- Variable selection in qualitative models via an entropic explanatory power
- Comparison of Bayesian predictive methods for model selection
- Good, great, or lucky? Screening for firms with sustained superior performance using heavy-tailed priors
- On the half-Cauchy prior for a global scale parameter
- Penalized regression, standard errors, and Bayesian Lassos
- Bayesian hyper-LASSOs with non-convex penalization
- The horseshoe estimator for sparse signals
- Model choice in generalised linear models: a Bayesian approach via Kullback-Leibler projections
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Comparison of Discrimination Methods for the Classification of Tumors Using Gene Expression Data
- Multivariate t-Distributions and Their Applications
- Adaptive Rejection Sampling for Gibbs Sampling
- Bias-Corrected Hierarchical Bayesian Classification With a Selected Subset of High-Dimensional Features
- Local Shrinkage Rules, Lévy Processes and Regularized Regression
- Prior distributions for variance parameters in hierarchical models (Comment on article by Browne and Draper)