High-dimensional regression with unknown variance
Publication: 5965306
DOI: 10.1214/12-STS398 · zbMath: 1331.62346 · arXiv: 1109.5587 · MaRDI QID: Q5965306
Christophe Giraud, Sylvie Huet, Nicolas Verzelen
Publication date: 3 March 2016
Published in: Statistical Science
Full work available at URL: https://arxiv.org/abs/1109.5587
Nonparametric regression and quantile regression (62G08)
Ridge regression; shrinkage estimators (Lasso) (62J07)
Related Items (15)
- How can we identify the sparsity structure pattern of high-dimensional data: an elementary statistical analysis to interpretable machine learning
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
- Global-local mixtures: a unifying framework
- Oracle inequalities for high-dimensional prediction
- Optimal bounds for aggregation of affine estimators
- A global homogeneity test for high-dimensional linear regression
- Estimator selection in the Gaussian setting
- Prediction error bounds for linear regression with the TREX
- SOCP based variance free Dantzig selector with application to robust estimation
- Introduction to the special issue on sparsity and regularization methods
- High-dimensional regression with unknown variance
- Second-order Stein: SURE for SURE and other applications in high-dimensional inference
- Linear Hypothesis Testing in Dense High-Dimensional Linear Models
- The Partial Linear Model in High Dimensions
- A fast algorithm for the semi-definite relaxation of the state estimation problem in power grids
Uses Software
Cites Work
- Estimator selection in the Gaussian setting
- Detecting multiple change-points in the mean of Gaussian process by model selection
- Nearly unbiased variable selection under minimax concave penalty
- Graph Selection with GGMselect
- The Adaptive Lasso and Its Oracle Properties
- UPS delivers optimal phase diagram in high-dimensional variable selection
- Exponential screening and optimal rates of sparse estimation
- Estimation of high-dimensional low-rank matrices
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- Optimal selection of reduced rank estimators of high-dimensional matrices
- \(\ell_{1}\)-penalization for mixture regression models
- Comments on: \(\ell_{1}\)-penalization for mixture regression models
- Segmentation of the mean of heteroscedastic data via cross-validation
- Estimator selection with respect to Hellinger-type risks
- Oracle inequalities and optimal inference under group sparsity
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- Slope heuristics: overview and implementation
- Asymptotic properties of criteria for selection of variables in multiple regression
- A survey of cross-validation procedures for model selection
- High-dimensional Gaussian model selection on a Gaussian design
- The benefit of group sparsity
- Mixing least-squares estimators when the variance is unknown
- Gaussian model selection with an unknown variance
- A simple proof of the restricted isometry property for random matrices
- Asymptotic optimality for \(C_p\), \(C_L\), cross-validation and generalized cross-validation: Discrete index set
- The \(L_1\) convergence of kernel density estimates
- Reduced-rank regression for the multivariate linear model
- Estimating the dimension of a model
- Risk bounds for model selection via penalization
- A transformation theorem for one-dimensional \(f\)-expansions
- A new algorithm for fixed design regression and denoising
- Least angle regression. (With discussion)
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- Estimation of Gaussian graphs by model selection
- On the conditions used to prove oracle results for the Lasso
- Low rank multivariate regression
- Rank penalized estimators for high-dimensional matrices
- Minimal penalties for Gaussian model selection
- Simultaneous analysis of Lasso and Dantzig selector
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Asymptotically optimal difference-based estimation of variance in nonparametric regression
- Information Theory and Mixing Least-Squares Regressions
- Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing
- The Bayesian Lasso
- An optimal selection of regression variables
- The Predictive Sample Reuse Method with Applications
- Atomic Decomposition by Basis Pursuit
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sparsity and Smoothness Via the Fused Lasso
- Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over \(\ell_q\)-Balls
- Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations
- Linear Model Selection by Cross-Validation
- Regularization and Variable Selection Via the Elastic Net
- Sparsity regret bounds for individual sequences in online linear regression
- Model Selection and Estimation in Regression with Grouped Variables
- Some Comments on \(C_p\)
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality
- Estimating Linear Restrictions on Regression Coefficients for Multivariate Normal Distributions
- Robust Statistics
- Compressed sensing
- Gaussian model selection
- High-dimensional regression with unknown variance
- Comments on: \(\ell_{1}\)-penalization for mixture regression models