Iteratively reweighted \(\ell_1\)-penalized robust regression
From MaRDI portal
Publication: Q2044416
DOI: 10.1214/21-EJS1862
zbMath: 1472.62116
arXiv: 1907.04027
OpenAlex: W3177098548
MaRDI QID: Q2044416
Xiaoou Pan, Qiang Sun, Wen-Xin Zhou
Publication date: 9 August 2021
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1907.04027
Keywords: convex relaxation ⋮ oracle property ⋮ nonconvex regularization ⋮ heavy-tailed noise ⋮ oracle rate ⋮ adaptive Huber regression ⋮ optimization error
Mathematics Subject Classification: Ridge regression; shrinkage estimators (Lasso) (62J07) ⋮ Nonparametric robustness (62G35) ⋮ Foundations and philosophical topics in statistics (62A01) ⋮ Statistics of extreme values; tail inference (62G32)
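For readers skimming this record: the titled procedure combines a Huber loss (robust to the heavy-tailed noise named in the keywords) with iteratively reweighted \(\ell_1\) penalization, where the weights come from the derivative of a nonconvex penalty. Below is a minimal, illustrative sketch of that general idea, assuming an MCP-derivative reweighting rule and a plain proximal-gradient inner solver; the function names and tuning constants are hypothetical and not taken from the paper's implementation.

```python
import numpy as np

def huber_grad(X, y, beta, tau):
    """Gradient of the Huber loss (1/n) * sum_i l_tau(y_i - x_i' beta)."""
    r = y - X @ beta
    psi = np.clip(r, -tau, tau)          # Huber score: bounded influence
    return -X.T @ psi / len(y)

def irw_l1_huber(X, y, lam, tau, gamma=3.0, n_outer=3, n_inner=300):
    """Iteratively reweighted l1-penalized Huber regression (sketch).

    Each outer stage solves a weighted-l1 Huber problem by proximal
    gradient; weights are then refreshed via the MCP derivative
    p'(t) = max(0, lam - t/gamma), so large coefficients become
    (nearly) unpenalized at later stages.
    """
    n, p = X.shape
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1/L, L = smoothness const.
    beta = np.zeros(p)
    w = np.full(p, lam)                             # stage 1: plain lasso weights
    for _ in range(n_outer):
        for _ in range(n_inner):
            z = beta - step * huber_grad(X, y, beta, tau)
            beta = np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)
        w = np.maximum(0.0, lam - np.abs(beta) / gamma)   # MCP reweighting
    return beta

# Toy usage: sparse signal, heavy-tailed (Student-t, df=2) noise.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = 2.0
y = X @ beta_true + rng.standard_t(df=2, size=n)
beta_hat = irw_l1_huber(X, y, lam=0.2, tau=2.0)
```

The outer reweighting loop is what distinguishes this from a one-shot Huber lasso: stage one is an ordinary \(\ell_1\) fit, and subsequent stages shrink the penalty on coefficients that stage one found to be large, reducing the lasso's shrinkage bias.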
Related Items
Robust variable selection and estimation via adaptive elastic net S-estimators for linear regression ⋮ Retire: robust expectile regression in high dimensions ⋮ Robust Signal Recovery for High-Dimensional Linear Log-Contrast Models with Compositional Covariates ⋮ Renewable Huber estimation method for streaming datasets ⋮ Robust projected principal component analysis for large-dimensional semiparametric factor modeling ⋮ High-dimensional robust approximated \(M\)-estimators for mean regression with asymmetric data
Uses Software
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- Geometric median and robust estimation in Banach spaces
- The Adaptive Lasso and Its Oracle Properties
- Statistics for high-dimensional data. Methods, theory and applications.
- Support recovery without incoherence: a case for nonconvex regularization
- Parametric estimation. Finite sample theory
- SLOPE-adaptive variable selection via convex optimization
- One-step sparse estimates in nonconcave penalized likelihood models
- Random generation of combinatorial structures from a uniform distribution
- Regularization and the small-ball method. I: Sparse recovery
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Challenging the empirical mean and empirical variance: a deviation study
- Robust regression through the Huber's criterion and adaptive lasso penalty
- Slope meets Lasso: improved oracle bounds and optimality
- The landscape of empirical risk for nonconvex losses
- Learning from MOM's principles: Le Cam's approach
- Statistical consistency and asymptotic normality for high-dimensional robust \(M\)-estimators
- Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions
- Robust statistical learning with Lipschitz and convex loss functions
- Robust machine learning by median-of-means: theory and practice
- Robust inference via multiplier bootstrap
- Sparse regression: scalable algorithms and empirical performance
- Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons
- Risk minimization by median-of-means tournaments
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Mean estimation and regression under heavy-tailed distributions: A survey
- Regularization, sparse recovery, and median-of-means tournaments
- High-dimensional graphs and variable selection with the Lasso
- Financial Data and the Skewed Generalized T Distribution
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Scaled sparse linear regression
- Adaptive Huber Regression
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- High-Dimensional Statistics
- High-Dimensional Probability
- Likelihood-Based Selection and Sharp Parameter Estimation
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- A New Principle for Tuning-Free Huber Regression
- Sharp Oracle Inequalities for High-Dimensional Matrix Prediction
- Regularization and Variable Selection Via the Elastic Net
- Estimation of High Dimensional Mean Regression in the Absence of Symmetry and Light Tail Assumptions
- Model Selection and Estimation in Regression with Grouped Variables
- Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
- Robust Estimation of a Location Parameter
- A general theory of concave regularization for high-dimensional sparse estimation problems