Nonlinear Variable Selection via Deep Neural Networks
From MaRDI portal
Publication:5066407
DOI: 10.1080/10618600.2020.1814305
OpenAlex: W3081750857
MaRDI QID: Q5066407
No author found.
Publication date: 29 March 2022
Published in: Journal of Computational and Graphical Statistics
Full work available at URL: https://doi.org/10.1080/10618600.2020.1814305
Cites Work
- BART: Bayesian additive regression trees
- Nearly unbiased variable selection under minimax concave penalty
- Variable selection for BART: an application to gene regulation
- Sparsity in multiple kernel learning
- Tight conditions for consistency of variable selection in the context of high dimensionality
- CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- Estimating the dimension of a model
- Double sparsity kernel learning with automatic variable selection and data extraction
- Least angle regression. (With discussion)
- Detection of sparse additive functions
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- Extended Bayesian information criteria for model selection with large model spaces
- Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Bayesian Neural Networks for Selection of Drug Sensitive Genes
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- DOI: 10.1162/153244303321897690
- Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting
- Composite Likelihood Bayesian Information Criteria for Model Selection in High-Dimensional Data
- Regularization and Variable Selection Via the Elastic Net
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Minimax-optimal rates for sparse additive models over kernel classes via convex programming
- Greedy Sparsity-Constrained Optimization
- Some Comments on \(C_p\)
- Random forests
- A general theory of concave regularization for high-dimensional sparse estimation problems