Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms
From MaRDI portal
Publication:5144778
DOI: 10.1287/opre.2019.1919
zbMath: 1457.90153
arXiv: 1803.01454
OpenAlex: W3048822613
MaRDI QID: Q5144778
Hussein Hazimeh, Rahul Mazumder
Publication date: 19 January 2021
Published in: Operations Research
Full work available at URL: https://arxiv.org/abs/1803.01454
Keywords: mixed integer programming; sparsity; high-dimensional statistics; Lasso; coordinate descent; large-scale computation; interpretable machine learning
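To illustrate the method the keywords refer to: the paper builds on cyclic coordinate descent for the \(\ell_0\)-penalized least squares problem \(\min_\beta \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda_0\|\beta\|_0\), where each coordinate update reduces to a hard-thresholding step. The sketch below is a minimal, hedged rendition of that core update only; the function name, the unit-norm column assumption, and the stopping rule are choices made here for illustration, and the paper's actual algorithm (implemented in L0Learn) additionally uses local combinatorial swap moves and optional \(\ell_1/\ell_2\) regularization, which this sketch omits.

```python
import numpy as np

def l0_coordinate_descent(X, y, lam0, n_iter=100):
    """Cyclic coordinate descent for
        min_b 0.5 * ||y - X b||^2 + lam0 * ||b||_0,
    assuming the columns of X have unit l2 norm.
    Each coordinate update is exact hard thresholding:
        b_j = rho_j  if |rho_j| > sqrt(2 * lam0),  else 0,
    where rho_j is the correlation of column j with the
    partial residual (beta_j's contribution added back)."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                        # running residual y - X @ beta
    thresh = np.sqrt(2.0 * lam0)        # hard-threshold level for unit-norm columns
    for _ in range(n_iter):
        for j in range(p):
            rho = X[:, j] @ r + beta[j]             # partial correlation
            new = rho if abs(rho) > thresh else 0.0  # hard thresholding
            if new != beta[j]:
                r += X[:, j] * (beta[j] - new)       # keep residual in sync
                beta[j] = new
    return beta
```

Because each update exactly minimizes the objective over one coordinate, the objective is nonincreasing along the sweeps; this is what makes pure coordinate descent fast but also why the paper adds local combinatorial moves, which can escape the weak stationary points where single-coordinate updates stall.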
Related Items (25)
- On the Convexification of Constrained Quadratic Optimization Problems with Indicator Variables
- An Alternating Method for Cardinality-Constrained Optimization: A Computational Study for the Best Subset Selection and Sparse Portfolio Problems
- MIP-BOOST: Efficient and Effective L0 Feature Selection for Linear Regression
- The backbone method for ultra-high dimensional sparse machine learning
- Subset selection in network-linked data
- Grouped variable selection with discrete optimization: computational and statistical perspectives
- HARFE: hard-ridge random feature expansion
- Sparse quantile regression
- On clustering and interpreting with rules by means of mathematical optimization
- Comparing solution paths of sparse quadratic minimization with a Stieltjes matrix
- Unnamed Item
- Best subset selection with shrinkage: sparse additive hazards regression with the grouping effect
- Linear regression with partially mismatched data: local search with theoretical guarantees
- Subset Selection and the Cone of Factor-Width-k Matrices
- Rejoinder: ``Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons
- The Trimmed Lasso: Sparse Recovery Guarantees and Practical Optimization by the Generalized Soft-Min Penalty
- A polynomial algorithm for best-subset selection problem
- An extended Newton-type algorithm for \(\ell_2\)-regularized sparse logistic regression and its efficiency for classifying large-scale datasets
- Randomized Gradient Boosting Machine
- L0Learn
- Sparse classification: a scalable discrete optimization perspective
- Robust subset selection
- Mining events with declassified diplomatic documents
- Unnamed Item
- Sparse regression at scale: branch-and-bound rooted in first-order optimization
Uses Software
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Nearly unbiased variable selection under minimax concave penalty
- Best subset selection via a modern optimization lens
- Iterative hard thresholding methods for \(l_0\) regularized convex cone programming
- Statistics for high-dimensional data. Methods, theory and applications.
- Iterative hard thresholding for compressed sensing
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
- Hedonic housing prices and the demand for clean air
- The Lasso problem and uniqueness
- Sparsity Constrained Nonlinear Optimization: Optimality Conditions and Algorithms
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Random Coordinate Descent Methods for \(\ell_0\) Regularized Convex Optimization
- SparseNet: Coordinate Descent With Nonconvex Penalties
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sparse Approximate Solutions to Linear Systems
- Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Regularization and Variable Selection Via the Elastic Net
- The Discrete Dantzig Selector: Estimating Sparse Linear Models via Mixed Integer Linear Optimization
- On the Convergence of Block Coordinate Descent Type Methods
- Strong Rules for Discarding Predictors in Lasso-Type Problems
- Convergence of a block coordinate descent method for nondifferentiable minimization
- A general theory of concave regularization for high-dimensional sparse estimation problems