Scaled sparse linear regression
Publication: 3143465
DOI: 10.1093/biomet/ass043
zbMath: 1452.62515
arXiv: 1104.4595
OpenAlex: W2154972590
MaRDI QID: Q3143465
Publication date: 30 November 2012
Published in: Biometrika
Full work available at URL: https://arxiv.org/abs/1104.4595
Keywords: iterative algorithm, scale invariance, linear regression, convex minimization, variance estimation, oracle inequality, penalized least squares, estimation after model selection
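As an illustration of the keywords above (penalized least squares with joint variance estimation via an iterative, scale-invariant algorithm), the sketch below shows one common form of a scaled-lasso-style iteration: alternate a lasso step whose penalty is proportional to the current noise estimate with an update of the noise estimate from the residuals. This is a minimal illustrative sketch, not code from the paper; the helper name scaled_lasso, the default penalty level sqrt(2*log(p)/n), and the use of scikit-learn's Lasso solver are assumptions made here.

```python
# Minimal illustrative sketch (not the authors' reference implementation)
# of a scaled-lasso-style iteration: alternate a lasso fit whose penalty
# is proportional to the current noise estimate, then refresh the noise
# estimate from the residuals.
import numpy as np
from sklearn.linear_model import Lasso


def scaled_lasso(X, y, lam0=None, n_iter=50, tol=1e-8):
    """Jointly estimate sparse coefficients and the noise level."""
    n, p = X.shape
    if lam0 is None:
        # universal penalty level, a common default (an assumption here)
        lam0 = np.sqrt(2.0 * np.log(p) / n)
    sigma = np.std(y)              # crude initial noise estimate
    beta = np.zeros(p)
    for _ in range(n_iter):
        # beta-step: sklearn's Lasso minimizes
        # ||y - X b||^2 / (2n) + alpha * ||b||_1, so set alpha = lam0 * sigma
        fit = Lasso(alpha=lam0 * sigma, fit_intercept=False).fit(X, y)
        beta = fit.coef_
        # sigma-step: root mean squared residual
        new_sigma = np.linalg.norm(y - X @ beta) / np.sqrt(n)
        if abs(new_sigma - sigma) < tol:
            sigma = new_sigma
            break
        sigma = new_sigma
    return beta, sigma
```

A typical call would be beta_hat, sigma_hat = scaled_lasso(X, y), returning the sparse coefficient estimate together with an estimate of the noise standard deviation.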
Related Items
Adaptive robust estimation in sparse vector model, Edge detection in sparse Gaussian graphical models, Significance testing in non-sparse high-dimensional linear models, On the prediction loss of the Lasso in the partially labeled setting, Partitioned Approach for High-dimensional Confidence Intervals with Large Split Sizes, A self-calibrated direct approach to precision matrix estimation and linear discriminant analysis in high dimensions, A nonparametric empirical Bayes approach to large-scale multivariate regression, Confidence intervals for high-dimensional partially linear single-index models, De-biasing the Lasso with degrees-of-freedom adjustment, Matrix completion via max-norm constrained optimization, On estimation of the diagonal elements of a sparse precision matrix, The benefit of group sparsity in group inference with de-biased scaled group Lasso, Testing a single regression coefficient in high dimensional linear models, Regression analysis for microbiome compositional data, L0-Regularized Learning for High-Dimensional Additive Hazards Regression, AIC for the Lasso in generalized linear models, Ridge regression revisited: debiasing, thresholding and bootstrap, Kernel-penalized regression for analysis of microbiome data, High-dimensional regression with potential prior information on variable importance, Hypothesis Testing in High-Dimensional Instrumental Variables Regression With an Application to Genomics Data, Hierarchical correction of \(p\)-values via an ultrametric tree running Ornstein-Uhlenbeck process, Contraction of a quasi-Bayesian model with shrinkage priors in precision matrix estimation, High-dimensional multivariate posterior consistency under global-local shrinkage priors, Variance estimation for sparse ultra-high dimensional varying coefficient models, Optimal equivariant prediction for high-dimensional linear models with arbitrary predictor covariance, Balanced estimation for high-dimensional measurement error models, High-dimensional tests for functional networks of brain anatomic regions, Confidence regions for entries of a large precision matrix, Monte Carlo Simulation for Lasso-Type Problems by Estimator Augmentation, Statistical significance in high-dimensional linear models, Penalised robust estimators for sparse and high-dimensional linear models, Perspective functions: proximal calculus and applications in high-dimensional statistics, Accuracy assessment for high-dimensional linear regression, Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization, Correlated variables in regression: clustering and sparse estimation, Estimation of the \(\ell_2\)-norm and testing in sparse linear regression with unknown variance, Debiasing the debiased Lasso with bootstrap, Correcting for unknown errors in sparse high-dimensional function approximation, Blessing of massive scale: spatial graphical model estimation with a total cardinality constraint approach, Double-estimation-friendly inference for high-dimensional misspecified models, Scalable interpretable learning for multi-response error-in-variables regression, Prediction error after model search, Finite mixture regression: a sparse variable selection by model selection for clustering, SLOPE-adaptive variable selection via convex optimization, Adapting to unknown noise level in sparse deconvolution, Goodness-of-Fit Tests for High Dimensional Linear Models, Global-local mixtures: a unifying framework, Linear regression with sparsely permuted data, 
Penalized estimation in high-dimensional hidden Markov models with state-specific graphical models, Ill-posed estimation in high-dimensional models with instrumental variables, Covariate assisted screening and estimation, Selecting massive variables using an iterated conditional modes/medians algorithm, Oracle inequalities for high-dimensional prediction, Improved bounds for square-root Lasso and square-root slope, Optimal bounds for aggregation of affine estimators, Innovated scalable efficient inference for ultra-large graphical models, Debiasing the Lasso: optimal sample size for Gaussian designs, Adaptive estimation of high-dimensional signal-to-noise ratios, Selective inference with a randomized response, A significance test for the lasso, Discussion: “A significance test for the lasso”, Rejoinder: “A significance test for the lasso”, Robust subspace clustering, Pivotal estimation via square-root lasso in nonparametric regression, On tight bounds for the Lasso, High-dimensional variable screening and bias in subsequent inference, with an empirical comparison, A global homogeneity test for high-dimensional linear regression, On asymptotically optimal confidence regions and tests for high-dimensional models, Prediction error bounds for linear regression with the TREX, Robust feature screening for elliptical copula regression model, Testing for high-dimensional network parameters in auto-regressive models, High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}, Sorted concave penalized regression, Perspective maximum likelihood-type estimation via proximal decomposition, Inference without compatibility: using exponential weighting for inference on a parameter of a linear model, Variance prior forms for high-dimensional Bayesian variable selection, Lasso meets horseshoe: a survey, On the exponentially weighted aggregate with the Laplace prior, A study on tuning parameter selection for the high-dimensional lasso, Tuning-Free Heterogeneity Pursuit in Massive Networks, Iteratively reweighted \(\ell_1\)-penalized robust regression, A permutation approach for selecting the penalty parameter in penalized model selection, Asymptotic normality and optimalities in estimation of large Gaussian graphical models, Gaussian graphical model estimation with false discovery rate control, Optimal designs in sparse linear models, Evaluating visual properties via robust HodgeRank, Second-order Stein: SURE for SURE and other applications in high-dimensional inference, A two-stage sequential conditional selection approach to sparse high-dimensional multivariate regression models, Sharp oracle inequalities for low-complexity priors, Non-Convex Global Minimization and False Discovery Rate Control for the TREX, Comparing six shrinkage estimators with large sample theory and asymptotically optimal prediction intervals, A Sparse Learning Approach to Relative-Volatility-Managed Portfolio Selection, Sharp Oracle Inequalities for Square Root Regularization, In defense of the indefensible: a very naïve approach to high-dimensional inference, Nonsparse Learning with Latent Variables, Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation, The Dantzig selector for a linear model of diffusion processes, Linear Hypothesis Testing in Dense High-Dimensional Linear Models, Two-sample testing of high-dimensional linear regression coefficients via complementary sketching, Improved estimators for semi-supervised high-dimensional regression model, 
Covariate-adjusted Gaussian graphical model estimation with false discovery rate control, Targeted Inference Involving High-Dimensional Data Using Nuisance Penalized Regression, A new reproducing kernel‐based nonlinear dimension reduction method for survival data, Variance estimation in high-dimensional linear regression via adaptive elastic-net, A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates, An efficient GPU-parallel coordinate descent algorithm for sparse precision matrix estimation via scaled Lasso, Honest Confidence Sets for High-Dimensional Regression by Projection and Shrinkage, Scalable and efficient inference via CPE, Inference for High-Dimensional Linear Mixed-Effects Models: A Quasi-Likelihood Approach, A dual semismooth Newton based augmented Lagrangian method for large-scale linearly constrained sparse group square-root Lasso problems, Debiasing convex regularized estimators and interval estimation in linear models, Generalized matrix decomposition regression: estimation and inference for two-way structured data, Online Debiasing for Adaptively Collected High-Dimensional Data With Applications to Time Series Analysis, A unified precision matrix estimation framework via sparse column-wise inverse operator under weak sparsity, A semi-parametric approach to feature selection in high-dimensional linear regression models, Best subset selection with shrinkage: sparse additive hazards regression with the grouping effect, An integrated surrogate model constructing method: annealing combinable Gaussian process, StarTrek: combinatorial variable selection with false discovery rate control, Sparse additive models in high dimensions with wavelets, Inference on Multi-level Partial Correlations Based on Multi-subject Time Series Data, Moderate-Dimensional Inferences on Quadratic Functionals in Ordinary Least Squares, Inference robust to outliers with ℓ1-norm penalization, A Tuning-free Robust and Efficient Approach to High-dimensional Regression, Discussion of “A Tuning-Free Robust and Efficient Approach to High-Dimensional Regression”, Comment on “A Tuning-Free Robust and Efficient Approach to High-Dimensional Regression”, Rejoinder to “A Tuning-Free Robust and Efficient Approach to High-Dimensional Regression”, Paths Following Algorithm for Penalized Logistic Regression Using SCAD and MCP, A significance test for graph‐constrained estimation, Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models, Ridge regression and asymptotic minimax estimation over spheres of growing dimension, A general theory of concave regularization for high-dimensional sparse estimation problems, Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation, Discussion: “A significance test for the lasso”, Discussion: “A significance test for the lasso”, Discussion: “A significance test for the lasso”, Discussion: “A significance test for the lasso”, Discussion: “A significance test for the lasso”, High-dimensional statistical inference via DATE