A look at robustness and stability of \(\ell_1\)-versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. and Hastie et al.
DOI: 10.1214/20-STS809
OpenAlex: W3098688965
MaRDI QID: Q2225318
Authors: Armeen Taeb, Yuansi Chen, Peter Bühlmann
Publication date: 8 February 2021
Published in: Statistical Science
Full work available at URL: https://projecteuclid.org/euclid.ss/1605603635
Keywords: latent variables; variable selection; distributional robustness; high-dimensional estimation; low-rank estimation
Related Items (2)
- A literature review of (sparse) exponential family PCA
- Rejoinder: "Sparse regression: scalable algorithms and empirical performance"
Cites Work
- Bagging predictors
- The Adaptive Lasso and Its Oracle Properties
- Exact Spike Train Inference Via \(\ell_0\) Optimization
- Best subset selection via a modern optimization lens
- \(\ell_{0}\)-penalized maximum likelihood for sparse directed acyclic graphs
- Statistics for high-dimensional data. Methods, theory and applications.
- Characterization of the equivalence of robustification and regularization in linear and matrix regression
- Algorithm for cardinality-constrained quadratic optimization
- Efficient algorithms for computing the best subset regression models for large-scale problems
- Relaxed Lasso
- Breakthroughs in statistics. Volume I: Foundations and basic theory
- Estimating the dimension of a model
- Risk bounds for model selection via penalization
- Heuristics of instability and stabilization in model selection
- A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization
- Statistics for big data: a perspective
- Logistic regression: from art to science
- Least angle regression. (With discussion)
- On the conditions used to prove oracle results for the Lasso
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- Sparse high-dimensional regression: exact scalable algorithms and phase transitions
- Simultaneous analysis of Lasso and Dantzig selector
- Boosting for high-dimensional linear models
- Atomic Decomposition by Basis Pursuit
- Better Subset Regression Using the Nonnegative Garrote
- On Sparse Representations in Arbitrary Redundant Bases
- Greed is Good: Algorithmic Results for Sparse Approximation
- DOI: 10.1162/153244303321897717
- Matching pursuits with time-frequency dictionaries
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_1\)-Constrained Quadratic Programming (Lasso)
- Sparse Recovery With Orthogonal Matching Pursuit Under RIP
- Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise
- Robust Regression and Lasso
- Regularization and Variable Selection Via the Elastic Net
- Certifiably Optimal Low Rank Factor Analysis
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution