Sparse classification: a scalable discrete optimization perspective
From MaRDI portal
Publication:2071494
DOI: 10.1007/s10994-021-06085-5
OpenAlex: W3208154528
Wikidata: Q120689923 (Scholia: Q120689923)
MaRDI QID: Q2071494
Jean Pauphilet, Bart P. G. Van Parys, Dimitris J. Bertsimas
Publication date: 28 January 2022
Published in: Machine Learning
Full work available at URL: https://arxiv.org/abs/1710.01352
Related Items (5)
- Sparse convex optimization toolkit: a mixed-integer framework
- Distributed primal outer approximation algorithm for sparse convex programming with separable structures
- Unnamed Item
- Sparse regression at scale: branch-and-bound rooted in first-order optimization
- Sparse regression over clusters: SparClur
Uses Software
Cites Work
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Sure independence screening in generalized linear models with NP-dimensionality
- Nearly unbiased variable selection under minimax concave penalty
- Best subset selection via a modern optimization lens
- Dual coordinate descent methods for logistic regression and maximum entropy models
- Pegasos: primal estimated sub-gradient solver for SVM
- Support recovery without incoherence: a case for nonconvex regularization
- Characterization of the equivalence of robustification and regularization in linear and matrix regression
- One-step sparse estimates in nonconcave penalized likelihood models
- An algorithmic framework for convex mixed integer nonlinear programs
- Solving mixed integer nonlinear programs by outer approximation
- False discoveries occur early on the Lasso path
- Logistic regression: from art to science
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Support vector machines are universally consistent
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Support-vector networks
- Sparse high-dimensional regression: exact scalable algorithms and phase transitions
- Sparse regression: scalable algorithms and empirical performance
- Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons
- Sparse learning via Boolean relaxations
- Multi-stage convex relaxation for feature selection
- Variable selection using MM algorithms
- One-Bit Compressed Sensing by Linear Programming
- Optimization with Sparsity-Inducing Penalties
- Limits on Support Recovery With Probabilistic Models: An Information-Theoretic Framework
- Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors
- Robust 1-bit Compressed Sensing and Sparse Logistic Regression: A Convex Programming Approach
- SparseNet: Coordinate Descent With Nonconvex Penalties
- The Cutting-Plane Method for Solving Convex Programs
- Computing in Operations Research Using Julia
- Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing
- An outer-approximation algorithm for a class of mixed-integer nonlinear programs
- Projected gradient methods for linearly constrained problems
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Regression Shrinkage and Selection via The Lasso: A Retrospective
- A Statistical View of Some Chemometrics Regression Tools
- Projected Newton Methods for Optimization Problems with Simple Constraints
- Sparse Approximate Solutions to Linear Systems
- Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms
- Information-Theoretic Limits on Sparse Signal Recovery: Dense versus Sparse Measurement Matrices
- Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
- JuMP: A Modeling Language for Mathematical Optimization
- A fast dual algorithm for kernel logistic regression
- Gene selection for cancer classification using support vector machines