MIP-BOOST: Efficient and Effective L0 Feature Selection for Linear Regression
DOI: 10.1080/10618600.2020.1845184
OpenAlex: W3102593406
MaRDI QID: Q5066443
Ana Kenney, Francesca Chiaromonte, Giovanni Felici
Publication date: 29 March 2022
Published in: Journal of Computational and Graphical Statistics
Full work available at URL: https://doi.org/10.1080/10618600.2020.1845184
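For context, the problem named in the title is best-subset (L0-constrained) linear regression. A minimal sketch of the formulation, assuming the standard big-M mixed-integer program as in the cited "Best subset selection via a modern optimization lens" (the sparsity budget $k$ and coefficient bound $M$ are standard modeling parameters, not values taken from this record):

\[
\min_{\beta \in \mathbb{R}^p,\; z \in \{0,1\}^p} \; \tfrac{1}{2}\lVert y - X\beta \rVert_2^2
\quad \text{s.t.} \quad |\beta_j| \le M z_j \;\; (j = 1,\dots,p), \qquad \sum_{j=1}^{p} z_j \le k.
\]

Here $z_j = 0$ forces $\beta_j = 0$, so at most $k$ of the $p$ features enter the model; MIP-BOOST concerns solving this selection problem efficiently in practice.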
Related Items (3)
- The backbone method for ultra-high dimensional sparse machine learning
- Simultaneous feature selection and outlier detection with optimality guarantees
- Robust subset selection
Cites Work
- Sparse estimation of a covariance matrix
- Best subset selection via a modern optimization lens
- Mixed integer second-order cone programming formulations for variable selection in linear regression
- Relaxed Lasso
- Logistic regression: from art to science
- Least angle regression. (With discussion)
- Sparse high-dimensional regression: exact scalable algorithms and phase transitions
- Rejoinder: "Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons"
- Sparsity Constrained Nonlinear Optimization: Optimality Conditions and Algorithms
- Random Coordinate Descent Methods for $\ell_0$ Regularized Convex Optimization
- The Group Lasso for Logistic Regression
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Regression Shrinkage and Selection via The Lasso: A Retrospective
- An overview of the estimation of large covariance and precision matrices
- Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms
- Regularization and Variable Selection Via the Elastic Net
- Optimal Whitening and Decorrelation