Sparse high-dimensional regression: exact scalable algorithms and phase transitions
DOI: 10.1214/18-AOS1804
zbMath: 1444.62094
arXiv: 1709.10029
MaRDI QID: Q2176621
Authors: Dimitris J. Bertsimas, Bart P. G. Van Parys
Publication date: 5 May 2020
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1709.10029
Related Items
- Sparse high-dimensional linear regression. Estimating squared error and a phase transition
- Stochastic Cutting Planes for Data-Driven Optimization
- An Alternating Method for Cardinality-Constrained Optimization: A Computational Study for the Best Subset Selection and Sparse Portfolio Problems
- MIP-BOOST: Efficient and Effective L0 Feature Selection for Linear Regression
- The backbone method for ultra-high dimensional sparse machine learning
- Detecting racial bias in jury selection
- A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure
- A Scalable Algorithm for Sparse Portfolio Selection
- Sparse high-dimensional regression: exact scalable algorithms and phase transitions
- Grouped variable selection with discrete optimization: computational and statistical perspectives
- HARFE: hard-ridge random feature expansion
- A new perspective on low-rank optimization
- Simultaneous feature selection and outlier detection with optimality guarantees
- Learning sparse nonlinear dynamics via mixed-integer optimization
- Sparsifying the least-squares approach to PCA: comparison of lasso and cardinality constraint
- Sparse quantile regression
- Subset Selection and the Cone of Factor-Width-k Matrices
- Combinatorial optimization. Abstracts from the workshop held November 7--13, 2021 (hybrid meeting)
- Certifiably optimal sparse inverse covariance estimation
- Bicriteria algorithms to balance coverage and cost in team formation under online model
- Sparse HP filter: finding kinks in the COVID-19 contact rate
- Sparse regression: scalable algorithms and empirical performance
- Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons
- A discussion on practical considerations with sparse regression methodologies
- Discussion of ``Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons''
- Modern variable selection in action: comment on the papers by HTT and BPV
- A look at robustness and stability of $\ell_1$-versus $\ell_0$-regularization: discussion of papers by Bertsimas et al. and Hastie et al.
- Rejoinder: ``Sparse regression: scalable algorithms and empirical performance''
- Scalable Algorithms for the Sparse Ridge Regression
- A Unified Approach to Mixed-Integer Optimization Problems With Logical Constraints
- The Trimmed Lasso: Sparse Recovery Guarantees and Practical Optimization by the Generalized Soft-Min Penalty
- Optimization problems for machine learning: a survey
- An efficient optimization approach for best subset selection in linear regression, with application to model selection and fitting in autoregressive time-series
- Sparse classification: a scalable discrete optimization perspective
- Robust subset selection
- Sparse Convex Regression
- A Mixed-Integer Fractional Optimization Approach to Best Subset Selection
- Sparse hierarchical regression with polynomials
- Variable selection in convex quantile regression: $\mathcal{L}_1$-norm or $\mathcal{L}_0$-norm regularization?
- Bicriteria streaming algorithms to balance gain and cost with cardinality constraint
- Sparse regression at scale: branch-and-bound rooted in first-order optimization
- Convex optimization under combinatorial sparsity constraints
- Sparse regression over clusters: SparClur
- Ideal formulations for constrained convex optimization problems with indicator variables
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- Best subset selection via a modern optimization lens
- Statistics for high-dimensional data. Methods, theory and applications.
- On general minimax theorems
- Solving mixed integer nonlinear programs by outer approximation
- A brief history of linear and mixed-integer programming computation
- Sparse high-dimensional linear regression. Estimating squared error and a phase transition
- Sparse high-dimensional regression: exact scalable algorithms and phase transitions
- Sparse regression: scalable algorithms and empirical performance
- Branch-and-Price: Column Generation for Solving Huge Integer Programs
- Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing
- An outer-approximation algorithm for a class of mixed-integer nonlinear programs
- Updating the Inverse of a Matrix
- Regressions by Leaps and Bounds
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Does $\ell_p$-Minimization Outperform $\ell_1$-Minimization?
- Matching pursuits with time-frequency dictionaries
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell_1$-Constrained Quadratic Programming (Lasso)
- On the Performance of Sparse Recovery Via $\ell_p$-Minimization $(0 \leq p \leq 1)$
- Regularization and Variable Selection Via the Elastic Net
- Branch-and-Bound Methods: A Survey