On the conditions used to prove oracle results for the Lasso
Publication: 1952029
DOI: 10.1214/09-EJS506
zbMath: 1327.62425
arXiv: 0910.0722
OpenAlex: W2092058109
Wikidata: Q98839733
MaRDI QID: Q1952029
Sara van de Geer, Peter Bühlmann
Publication date: 27 May 2013
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/0910.0722
Keywords: coherence, compatibility, sparsity, Lasso, irrepresentable condition, restricted eigenvalue, restricted isometry
MSC: Ridge regression; shrinkage estimators (Lasso) (62J07); Nonparametric estimation (62G05); General considerations in statistical decision theory (62C05)
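The keywords name the design conditions whose logical relations the paper maps out. For orientation, here is a sketch of the two central conditions in their standard form (notation assumed here, not taken from the record: design matrix \(X \in \mathbb{R}^{n \times p}\), Gram matrix \(\hat{\Sigma} = X^\top X / n\), active set \(S_0\) with \(s_0 = |S_0|\), and cone constant 3 corresponding to the usual choice of penalty level). The compatibility condition holds with constant \(\phi(S_0) > 0\) if, for all \(\beta\) with \(\|\beta_{S_0^c}\|_1 \le 3 \|\beta_{S_0}\|_1\),
\[
\|\beta_{S_0}\|_1^2 \;\le\; \frac{s_0\, \beta^\top \hat{\Sigma} \beta}{\phi^2(S_0)};
\]
the restricted eigenvalue condition instead requires \(\|\beta_{S_0}\|_2^2 \le \beta^\top \hat{\Sigma} \beta / \phi_{\mathrm{RE}}^2(S_0)\) on the same cone. Since \(\|\beta_{S_0}\|_1^2 \le s_0 \|\beta_{S_0}\|_2^2\) by Cauchy-Schwarz, a restricted eigenvalue bound implies compatibility, which is why compatibility sits near the weak end of the paper's hierarchy of conditions for the Lasso oracle inequality.

The compatibility constant can also be probed numerically; the following Python sketch (a hypothetical illustration, not code from the paper: the function name compatibility_probe and the sampling scheme are this note's own) samples random directions, pulls them into the cone, and records the smallest observed ratio. Because the true constant is a minimum over the entire cone, random sampling only yields an optimistic upper estimate of \(\phi(S_0)\).

import numpy as np

def compatibility_probe(X, S0, n_draws=50_000, seed=0):
    # phi^2(S0) = min over the cone ||beta_{S0^c}||_1 <= 3 ||beta_{S0}||_1
    # of s0 * beta' Sigma_hat beta / ||beta_{S0}||_1^2; sampling can only
    # overshoot this minimum, so the result is an upper estimate.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Sigma_hat = X.T @ X / n
    mask = np.zeros(p, dtype=bool)
    mask[np.asarray(S0)] = True
    s0 = int(mask.sum())
    best = np.inf
    for _ in range(n_draws):
        beta = rng.standard_normal(p)
        l1_S = np.abs(beta[mask]).sum()
        l1_Sc = np.abs(beta[~mask]).sum()
        if l1_Sc > 3.0 * l1_S:
            # rescale the inactive coordinates back into the cone
            beta[~mask] *= 3.0 * l1_S / l1_Sc
        best = min(best, s0 * (beta @ Sigma_hat @ beta) / l1_S**2)
    return best ** 0.5

# Example on synthetic data: probe a random 100 x 20 design with S0 = {0, 1, 2}.
X = np.random.default_rng(1).standard_normal((100, 20))
print(compatibility_probe(X, S0=[0, 1, 2]))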
Related Items
The robust desparsified lasso and the focused information criterion for high-dimensional generalized linear models, Markov Neighborhood Regression for High-Dimensional Inference, Gaining Outlier Resistance With Progressive Quantiles: Fast Algorithms and Theoretical Studies, Counterfactual Analysis With Artificial Controls: Inference, High Dimensions, and Nonstationarity, REMI: REGRESSION WITH MARGINAL INFORMATION AND ITS APPLICATION IN GENOME-WIDE ASSOCIATION STUDIES, Poisson Regression With Error Corrupted High Dimensional Features, High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks, Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization, Variable selection in partial linear regression with functional covariate, A component lasso, A proximal dual semismooth Newton method for zero-norm penalized quantile regression estimator, Sparsest representations and approximations of an underdetermined linear system, Sparse linear regression models of high dimensional covariates with non-Gaussian outliers and Berkson error-in-variable under heteroscedasticity, Adaptive Bayesian SLOPE: Model Selection With Incomplete Data, Lasso for sparse linear regression with exponentially \(\beta\)-mixing errors, Calibrated zero-norm regularized LS estimator for high-dimensional error-in-variables regression, Regularized estimation of high‐dimensional vector autoregressions with weakly dependent innovations, Binacox: automatic cut‐point detection in high‐dimensional Cox model with applications in genetics, Rejoinder to “Reader reaction to ‘Outcome‐adaptive Lasso: Variable selection for causal inference’ by Shortreed and Ertefaie (2017)”, Group sparse recovery via group square-root elastic net and the iterative multivariate thresholding-based algorithm, A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates, High-Dimensional Gaussian Graphical Regression Models with Covariates, ESTIMATION OF A HIGH-DIMENSIONAL COUNTING PROCESS WITHOUT PENALTY FOR HIGH-FREQUENCY EVENTS, Non-asymptotic oracle inequalities for the Lasso and Group Lasso in high dimensional logistic model, Structure learning of exponential family graphical model with false discovery rate control, Sparse estimation in high-dimensional linear errors-in-variables regression via a covariate relaxation method, Analysis of sparse recovery for Legendre expansions using envelope bound, An efficient GPU-parallel coordinate descent algorithm for sparse precision matrix estimation via scaled Lasso, Sparse quantile regression, A simple homotopy proximal mapping algorithm for compressive sensing, Double-estimation-friendly inference for high-dimensional misspecified models, Generalized matrix decomposition regression: estimation and inference for two-way structured data, Lasso in Infinite dimension: application to variable selection in functional multivariate linear regression, False Discovery Rate Control via Data Splitting, Multi-Task Learning with High-Dimensional Noisy Images, Estimation and inference of treatment effects with \(L_2\)-boosting in high-dimensional settings, Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression, Two-stage communication-efficient distributed sparse M-estimation with missing data, Concentration of measure bounds for matrix-variate data with missing values, Bridging factor and sparse models, DIF statistical inference without knowing 
anchoring items, Goodness-of-Fit Tests for High Dimensional Linear Models, Recovery of partly sparse and dense signals, Multi-stage convex relaxation for feature selection, Optimal Sparse Linear Prediction for Block-missing Multi-modality Data Without Imputation, UNIFORM INFERENCE IN HIGH-DIMENSIONAL DYNAMIC PANEL DATA MODELS WITH APPROXIMATELY SPARSE FIXED EFFECTS, On the finite-sample analysis of \(\Theta\)-estimators, On the uniform convergence of empirical norms and inner products, with application to causal inference, Oracle inequalities for the Lasso in the additive hazards model with interval-censored data, Consistent parameter estimation for Lasso and approximate message passing, The Lasso for High Dimensional Regression with a Possible Change Point, Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models, Prediction error bounds for linear regression with the TREX, An ℓ1-oracle inequality for the Lasso in multivariate finite mixture of multivariate Gaussian regression models, Randomized pick-freeze for sparse Sobol indices estimation in high dimension, Prediction and estimation consistency of sparse multi-class penalized optimal scoring, Sorted concave penalized regression, Strong oracle optimality of folded concave penalized estimation, Quasi-likelihood and/or robust estimation in high dimensions, A selective review of group selection in high-dimensional models, High-dimensional regression with unknown variance, A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers, A general theory of concave regularization for high-dimensional sparse estimation problems, Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation, Randomized maximum-contrast selection: subagging for large-scale regression, Structured analysis of the high-dimensional FMR model, A study on tuning parameter selection for the high-dimensional lasso, High-dimensional linear model selection motivated by multiple testing, Comments on: \(\ell _{1}\)-penalization for mixture regression models, A review of Gaussian Markov models for conditional independence, Sharp oracle inequalities for low-complexity priors, Double machine learning with gradient boosting and its application to the Big \(N\) audit quality effect, A Simple Method for Estimating Interactions Between a Treatment and a Large Number of Covariates, Sparse recovery from extreme eigenvalues deviation inequalities, A Sparse Learning Approach to Relative-Volatility-Managed Portfolio Selection, Doubly penalized estimation in additive regression with high-dimensional data, The Dantzig selector for a linear model of diffusion processes, Weaker regularity conditions and sparse recovery in high-dimensional regression, Oracle Inequalities for Convex Loss Functions with Nonlinear Targets, Penalized least squares estimation in the additive model with different smoothness for the components, Graph-Based Regularization for Regression Problems with Alignment and Highly Correlated Designs, Fundamental barriers to high-dimensional regression with convex penalties, Canonical thresholding for nonsparse high-dimensional linear regression, On the prediction loss of the Lasso in the partially labeled setting, A general family of trimmed estimators for robust high-dimensional data analysis, Local linear smoothing for sparse high
dimensional varying coefficient models, Adaptive kernel estimation of the baseline function in the Cox model with high-dimensional covariates, Fitting sparse linear models under the sufficient and necessary condition for model identification, Regularity properties for sparse regression, An analysis of penalized interaction models, High dimensional regression for regenerative time-series: an application to road traffic modeling, Censored linear model in high dimensions. Penalised linear regression on high-dimensional data with left-censored response variable, Sparse high-dimensional linear regression. Estimating squared error and a phase transition, Bayesian high-dimensional semi-parametric inference beyond sub-Gaussian errors, On the post selection inference constant under restricted isometry properties, SLOPE is adaptive to unknown sparsity and asymptotically minimax, The variable selection by the Dantzig selector for Cox's proportional hazards model, Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model, The benefit of group sparsity in group inference with de-biased scaled group Lasso, Extreme eigenvalues of nonlinear correlation matrices with applications to additive models, High-dimensional regression with potential prior information on variable importance, The \(l_q\) consistency of the Dantzig selector for Cox's proportional hazards model, Oracle inequalities for the lasso in the Cox model, Best subset binary prediction, Statistical significance in high-dimensional linear models, Generalized Kalman smoothing: modeling and algorithms, Impacts of high dimensionality in finite samples, The convex geometry of linear inverse problems, Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization, Correlated variables in regression: clustering and sparse estimation, Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions, Bayesian linear regression with sparse priors, \(\ell_{1}\)-penalization for mixture regression models, Rejoinder to the comments on: \(\ell _{1}\)-penalization for mixture regression models, Efficient nonconvex sparse group feature selection via continuous and discrete optimization, Inference for high-dimensional instrumental variables regression, \(\ell_1\)-regularization of high-dimensional time-series models with non-Gaussian and heteroskedastic errors, Finite mixture regression: a sparse variable selection by model selection for clustering, Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization, A Rice method proof of the null-space property over the Grassmannian, High-dimensional additive hazards models and the lasso, Estimating networks with jumps, The Lasso problem and uniqueness, Asymptotically honest confidence regions for high dimensional parameters by the desparsified conservative Lasso, PAC-Bayesian bounds for sparse regression estimation with exponential weights, The Lasso as an \(\ell _{1}\)-ball model selection procedure, The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso), Sparsity considerations for dependent variables, The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods, Spatially-adaptive sensing in nonparametric regression, Sign-constrained least squares estimation for high-dimensional regression, ERM and RERM are optimal estimators for regression problems when malicious outliers 
corrupt the labels, Transductive versions of the Lasso and the Dantzig selector, Regularization for Cox's proportional hazards model with NP-dimensionality, Generalization of constraints for high dimensional regression problems, A general framework for Bayes structured linear models, A two-stage regularization method for variable selection and forecasting in high-order interaction model, A systematic review on model selection in high-dimensional regression, A look at robustness and stability of \(\ell_1\)-versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. and Hastie et al., On higher order isotropy conditions and lower bounds for sparse quadratic forms, Normalized and standard Dantzig estimators: two approaches, Generalized M-estimators for high-dimensional Tobit I models, Optimal Kullback-Leibler aggregation in mixture density estimation by maximum likelihood, Oracle inequalities for high dimensional vector autoregressions, Robust inference on average treatment effects with possibly more covariates than observations, Oracle inequalities for high-dimensional prediction, Decomposable norm minimization with proximal-gradient homotopy algorithm, Additive model selection, Approximate \(\ell_0\)-penalized estimation of piecewise-constant signals on graphs, I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error, Pivotal estimation via square-root lasso in nonparametric regression, Empirical Bayes oracle uncertainty quantification for regression, A Cluster Elastic Net for Multivariate Regression, High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity, Sparse semiparametric discriminant analysis, Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error, High-dimensional variable screening and bias in subsequent inference, with an empirical comparison, Sparse distance metric learning, Inference under Fine-Gray competing risks model with high-dimensional covariates, Exponential screening and optimal rates of sparse estimation, A global homogeneity test for high-dimensional linear regression, On asymptotically optimal confidence regions and tests for high-dimensional models, Greedy variance estimation for the LASSO, Regularized estimation in sparse high-dimensional time series models, Simultaneous feature selection and clustering based on square root optimization, Sparse space-time models: concentration inequalities and Lasso, High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}, Inference without compatibility: using exponential weighting for inference on a parameter of a linear model, False Discovery Rate Control Under General Dependence By Symmetrized Data Aggregation, On the exponentially weighted aggregate with the Laplace prior, Asymptotic normality and optimalities in estimation of large Gaussian graphical models, Fast global convergence of gradient methods for high-dimensional statistical recovery, Accuracy guaranties for \(\ell_{1}\) recovery of block-sparse signals, The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning, Control variate selection for Monte Carlo integration, Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning, High-dimensional inference for linear model with correlated errors, The finite sample properties of sparse M-estimators with pseudo-observations, In defense of the indefensible: a very naïve approach to 
high-dimensional inference, Robust subset selection, Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- The Dantzig selector and sparsity oracle inequalities
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Near-ideal model selection by \(\ell _{1}\) minimization
- Sparsity in penalized empirical risk minimization
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Extreme Eigenvalues of Toeplitz Forms and Applications to Elliptic Difference Equations
- Decoding by Linear Programming
- Shifting Inequality and Recovery of Sparse Signals
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- On Recovery of Sparse Signals Via $\ell _{1}$ Minimization
- Stable Recovery of Sparse Signals and an Oracle Inequality
- Sparse Density Estimation with ℓ1 Penalties