On the differences between \(L_2\) boosting and the Lasso
DOI: 10.1016/j.spl.2019.108634
zbMath: 1459.62137
arXiv: 1812.05421
OpenAlex: W2976603259
MaRDI QID: Q2288790
Publication date: 20 January 2020
Published in: Statistics & Probability Letters
Full work available at URL: https://arxiv.org/abs/1812.05421
Keywords: high-dimensional linear models; restricted eigenvalue condition; \(L_2\) boosting; parameter recovery/estimation; restricted nullspace property
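As a point of reference for these keywords (a minimal sketch of the two methods compared in the paper, using standard definitions rather than the paper's own notation): in the high-dimensional linear model \(y = X\beta^* + \varepsilon\) with \(X \in \mathbb{R}^{n \times p}\), the Lasso solves the penalized least-squares problem
\[
\hat\beta_{\mathrm{Lasso}} = \arg\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2n}\|y - X\beta\|_2^2 + \lambda \|\beta\|_1,
\]
while componentwise \(L_2\) boosting builds its estimate iteratively: given the current residual \(r^{(t)} = y - X\hat\beta^{(t)}\), it selects the best-correlated predictor \(j_t = \arg\max_j |x_j^\top r^{(t)}| / \|x_j\|_2\) and takes a shrunken univariate step
\[
\hat\beta^{(t+1)}_{j_t} = \hat\beta^{(t)}_{j_t} + \nu \,\frac{x_{j_t}^\top r^{(t)}}{\|x_{j_t}\|_2^2}, \qquad \nu \in (0,1].
\]
Conditions such as the restricted eigenvalue and restricted nullspace properties govern when either procedure recovers \(\beta^*\).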
Cites Work
- Greedy function approximation: A gradient boosting machine
- Signal representation using adaptive normalized Gaussian functions
- Least angle regression. (With discussion)
- Weak greedy algorithms
- Simultaneous analysis of Lasso and Dantzig selector
- Forward stagewise regression and the monotone lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder)
- Boosting for high-dimensional linear models
- Compressed sensing and best \(k\)-term approximation
- Greedy Approximation
- Decoding by Linear Programming
- Stable recovery of sparse overcomplete representations in the presence of noise
- Sparse representations in unions of bases
- Greed is Good: Algorithmic Results for Sparse Approximation
- Atomic Decomposition by Basis Pursuit
- Uncertainty principles and ideal atomic decomposition
- On sparse representation in pairs of bases
- Matching pursuits with time-frequency dictionaries
- Optimally sparse representation in general (nonorthogonal) dictionaries via \(\ell_1\) minimization