Oracle inequalities for high-dimensional prediction
DOI: 10.3150/18-BEJ1019
zbMath: 1431.62284
arXiv: 1608.00624
OpenAlex: W2963225834
Wikidata: Q128265662 (Scholia: Q128265662)
MaRDI QID: Q1740524
Lu Yu, Irina Gaynanova, Johannes Lederer
Publication date: 30 April 2019
Published in: Bernoulli
Full work available at URL: https://arxiv.org/abs/1608.00624
MSC classifications:
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Linear regression; mixed models (62J05)
- Inequalities; stochastic orderings (60E15)
Related Items (10)
- Tuning-free ridge estimators for high-dimensional generalized linear models
- Group sparse recovery via group square-root elastic net and the iterative multivariate thresholding-based algorithm
- Inference for high-dimensional instrumental variables regression
- Statistical guarantees for regularized neural networks
- Unnamed Item
- Unnamed Item
- Prediction error bounds for linear regression with the TREX
- Prediction and estimation consistency of sparse multi-class penalized optimal scoring
- Tuning parameter calibration for \(\ell_1\)-regularized logistic regression
- Tuning parameter calibration for personalized prediction in medicine
Cites Work
- Unnamed Item
- Unnamed Item
- Nonlinear total variation based noise removal algorithms
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- A lasso for hierarchical interactions
- Statistical significance in high-dimensional linear models
- New concentration inequalities for suprema of empirical processes
- A new perspective on least squares under convex constraint
- On higher order isotropy conditions and lower bounds for sparse quadratic forms
- On the prediction performance of the Lasso
- Statistics for high-dimensional data. Methods, theory and applications.
- Exponential screening and optimal rates of sparse estimation
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- SLOPE-adaptive variable selection via convex optimization
- Sparse recovery in convex hulls via entropy penalization
- On the prediction loss of the Lasso in the partially labeled setting
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Weak convergence and empirical processes. With applications to statistics
- On the conditions used to prove oracle results for the Lasso
- The Lasso as an \(\ell _{1}\)-ball model selection procedure
- The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods
- Slope meets Lasso: improved oracle bounds and optimality
- Simultaneous analysis of Lasso and Dantzig selector
- Optimal two-step prediction in regression
- Sparsity oracle inequalities for the Lasso
- Adaptive piecewise polynomial estimation via trend filtering
- Sparse Matrix Inversion with Scaled Lasso
- A Practical Scheme and Fast Algorithm to Tune the Lasso With Optimality Guarantees
- How Correlations Influence Lasso Prediction
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Scaled sparse linear regression
- \(\ell_1\) Trend Filtering
- Sparsity and Smoothness Via the Fused Lasso
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_1\)-Constrained Quadratic Programming (Lasso)
- Regularization and Variable Selection Via the Elastic Net
- The Group Square-Root Lasso: Theoretical Properties and Fast Algorithms
- Model Selection and Estimation in Regression with Grouped Variables
- Simultaneous Regression Shrinkage, Variable Selection, and Supervised Clustering of Predictors with OSCAR
- The Lasso, correlated design, and improved oracle inequalities
- Introduction to nonparametric estimation
- High-dimensional regression with unknown variance