The backbone method for ultra-high dimensional sparse machine learning
From MaRDI portal
Publication:2163249
DOI: 10.1007/s10994-021-06123-2
OpenAlex: W3035464515
Wikidata: Q120689903 (Scholia: Q120689903)
MaRDI QID: Q2163249
Vassilis Digalakis jun., Dimitris J. Bertsimas
Publication date: 10 August 2022
Published in: Machine Learning
Full work available at URL: https://arxiv.org/abs/2006.06592
Keywords: decision trees; feature selection; sparse regression; mixed integer optimization; sparse machine learning; ultra-high dimensional machine learning
Related Items (1)
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Sure independence screening in generalized linear models with NP-dimensionality
- Nearly unbiased variable selection under minimax concave penalty
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Best subset selection via a modern optimization lens
- Integer programming models for feature selection: new extensions and a randomized solution algorithm
- Supersparse linear integer models for optimized medical scoring systems
- Characterization of the equivalence of robustification and regularization in linear and matrix regression
- Mathematical optimization in classification and regression trees
- Searching for backbones -- an efficient parallel algorithm for the traveling salesman problem
- Learning Boolean concepts in the presence of many irrelevant features
- Enlarging the margins in perceptron decision trees
- A data-driven software tool for enabling cooperative information sharing among police departments
- Least angle regression. (With discussion)
- Optimization problems for machine learning: a survey
- The all-or-nothing phenomenon in sparse linear regression
- Sparse high-dimensional regression: exact scalable algorithms and phase transitions
- Sparse regression: scalable algorithms and empirical performance
- Sparsity in optimal randomized classification trees
- Sparse learning via Boolean relaxations
- Optimal randomized classification trees
- Branch-and-Price: Column Generation for Solving Huge Integer Programs
- Entropy-based model-free feature screening for ultrahigh-dimensional multiclass classification
- Nonparametric Independence Screening in Sparse Ultra-High-Dimensional Additive Models
- Multisurface method of pattern separation for medical diagnosis applied to breast cytology.
- An outer-approximation algorithm for a class of mixed-integer nonlinear programs
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- 10.1162/153244303322753616
- Sparse Approximate Solutions to Linear Systems
- Fifty Years of Classification and Regression Trees
- MIP-BOOST: Efficient and Effective L0 Feature Selection for Linear Regression
- Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms
- Scalable Algorithms for the Sparse Ridge Regression
- Robust Regression and Lasso
- Regularization and Variable Selection Via the Elastic Net
- A Split-and-Merge Bayesian Variable Selection Approach for Ultrahigh Dimensional Regression
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Optimal classification trees
- Random forests
- Gene selection for cancer classification using support vector machines