A Tuning-free Robust and Efficient Approach to High-dimensional Regression
Publication: 5146020
DOI: 10.1080/01621459.2020.1840989
zbMath: 1452.62525
OpenAlex: W3094117317
MaRDI QID: Q5146020
Yunan N. Wu, Lan Wang, Jelena Bradic, Bo Peng, Run-Ze Li
Publication date: 22 January 2021
Published in: Journal of the American Statistical Association
Full work available at URL: https://doi.org/10.1080/01621459.2020.1840989
MSC classifications:
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Linear regression; mixed models (62J05)
- Robustness and adaptive procedures (parametric inference) (62F35)
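The method behind the title, as I understand the linked article: the estimator (the Rank Lasso) minimizes Jaeckel's rank-based dispersion of the residuals, i.e. the average pairwise absolute residual difference, plus an \(\ell_1\) penalty. Because the subgradient of this loss at the true coefficients depends on the errors only through their ranks, its distribution is pivotal, so the penalty level can be simulated rather than cross-validated. Below is a minimal Python sketch under those assumptions; the constants `c` and `alpha0` and the helper names are illustrative, not the authors' defaults, and `cvxpy` is used only as a convenient off-the-shelf convex solver.

```python
import numpy as np
import cvxpy as cp

def tuning_free_lambda(X, n_sim=500, alpha0=0.1, c=1.01, seed=0):
    """Simulate a high quantile of the sup-norm of the rank-loss
    subgradient at the true coefficients. The distribution is free of
    the error law, so it can be drawn by permuting the ranks 1..n.
    (n_sim, alpha0, c are illustrative choices, not package defaults.)"""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    sups = np.empty(n_sim)
    for b in range(n_sim):
        r = rng.permutation(n) + 1                       # ranks of iid errors
        s = (2.0 / (n * (n - 1))) * (X.T @ (2 * r - (n + 1)))
        sups[b] = np.abs(s).max()
    return c * np.quantile(sups, 1 - alpha0)

def rank_lasso(X, y, lam):
    """Rank Lasso sketch: mean pairwise |e_i - e_j| plus l1 penalty."""
    n, p = X.shape
    # D stacks all n(n-1)/2 pairwise contrasts e_i - e_j for i < j.
    idx_i, idx_j = np.triu_indices(n, k=1)
    D = np.zeros((len(idx_i), n))
    D[np.arange(len(idx_i)), idx_i] = 1.0
    D[np.arange(len(idx_i)), idx_j] = -1.0
    beta = cp.Variable(p)
    resid = y - X @ beta
    loss = (2.0 / (n * (n - 1))) * cp.sum(cp.abs(D @ resid))
    cp.Problem(cp.Minimize(loss + lam * cp.norm1(beta))).solve()
    return beta.value
```

For the authors' actual implementation, see the TFRE R package listed under Uses Software below; the sketch above mirrors only the estimator's structure, not that package's API.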
Related Items
- Sparse Laplacian shrinkage for nonparametric transformation survival model
- Rate-optimal robust estimation of high-dimensional vector autoregressive models
- High-dimensional robust inference for censored linear models
- A semi-parametric approach to feature selection in high-dimensional linear regression models
- Robust high-dimensional tuning free multiple testing
- Consistent Estimation of the Number of Communities via Regularized Network Embedding
Uses Software
- TFRE
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- Estimating the error variance in a high-dimensional linear model
- Minimum distance Lasso for robust high-dimensional regression
- The \(L_1\) penalized LAD estimator for high dimensional linear regression
- Rank-based inference for the single-index model
- Statistics for high-dimensional data. Methods, theory and applications.
- Probability in Banach spaces. Isoperimetry and processes
- Rejoinder: One-step sparse estimates in nonconcave penalized likelihood models
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Rank regression with estimated scores
- Weak convergence and empirical processes. With applications to statistics
- Statistical consistency and asymptotic normality for high-dimensional robust \(M\)-estimators
- Prediction error bounds for linear regression with the TREX
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Ranking and empirical minimization of \(U\)-statistics
- Sparsity oracle inequalities for the Lasso
- Calibrating nonconvex penalized regression in ultra-high dimension
- Adaptive robust variable selection
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Atomic Decomposition by Basis Pursuit
- A permutation approach for selecting the penalty parameter in penalized model selection
- Variance estimation in high-dimensional linear models
- Shrinkage Tuning Parameter Selection with a Diverging Number of Parameters
- Sparse Matrix Inversion with Scaled Lasso
- A Practical Scheme and Fast Algorithm to Tune the Lasso With Optimality Guarantees
- How Correlations Influence Lasso Prediction
- Local Rank Inference for Varying Coefficient Models
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Scaled sparse linear regression
- Extended Bayesian information criteria for model selection with large model spaces
- Adaptive Huber Regression
- A Selective Overview of Variable Selection in High Dimensional Feature Space (Invited Review Article)
- Weighted Wilcoxon-Type Smoothly Clipped Absolute Deviation Method
- A resampling method based on pivotal estimating functions
- Estimating Regression Coefficients by Minimizing the Dispersion of the Residuals
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Penalized Composite Quasi-Likelihood for Ultrahigh Dimensional Variable Selection
- Variance Estimation Using Refitted Cross-Validation in Ultrahigh Dimensional Regression
- Quantile Regression for Analyzing Heterogeneity in Ultra-High Dimension
- Model Selection via Bayesian Information Criterion for Quantile Regression Models
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_1\)-Constrained Quadratic Programming (Lasso)
- Robust Estimation via Robust Gradient Estimation
- The restricted consistency property of leave-\(n_v\)-out cross-validation for high-dimensional variable selection
- Risk consistency of cross-validation with Lasso-type procedures
- Robust Variable Selection With Exponential Squared Loss
- Estimation of High Dimensional Mean Regression in the Absence of Symmetry and Light Tail Assumptions
- Robust and consistent variable selection in high-dimensional generalized linear models
- Tuning parameter selectors for the smoothly clipped absolute deviation method
- Tuning Parameter Selection in High Dimensional Penalized Likelihood
- A general theory of concave regularization for high-dimensional sparse estimation problems