Square-root lasso: pivotal recovery of sparse signals via conic programming
Publication: 3107973
DOI: 10.1093/biomet/asr043
zbMath: 1228.62083
arXiv: 1009.5689
OpenAlex: W3121832289
MaRDI QID: Q3107973
Alexandre Belloni, Victor Chernozhukov, Lie Wang
Publication date: 28 December 2011
Published in: Biometrika
Full work available at URL: https://arxiv.org/abs/1009.5689
Mathematics Subject Classification:
Asymptotic properties of parametric estimators (62F12)
Linear regression; mixed models (62J05)
Applications of mathematical programming (90C90)
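The estimator the paper introduces minimizes the square-root objective ||y - X beta||_2 / sqrt(n) + (lambda/n) ||beta||_1, a second-order cone program whose pivotal penalty level lambda does not depend on the unknown noise level sigma. Below is a minimal sketch, not part of this record: it solves the problem with cvxpy under the paper's recommended pivotal choice lambda = c * sqrt(n) * Phi^{-1}(1 - alpha/(2p)); the simulated data, the constants c = 1.1 and alpha = 0.05, and the solver choice are illustrative assumptions.

    # Minimal sketch: square-root lasso as a conic program (assumptions noted above).
    import numpy as np
    import cvxpy as cp
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n, p, s = 100, 200, 5                 # samples, features, true sparsity (illustrative)
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:s] = 1.0
    y = X @ beta_true + 0.5 * rng.standard_normal(n)

    # Pivotal penalty level: independent of the (unknown) noise level sigma.
    c, alpha = 1.1, 0.05
    lam = c * np.sqrt(n) * norm.ppf(1 - alpha / (2 * p))

    beta = cp.Variable(p)
    # Objective: ||y - X beta||_2 / sqrt(n) + (lam/n) * ||beta||_1,
    # a second-order cone program (hence "conic programming" in the title).
    objective = cp.Minimize(cp.norm2(y - X @ beta) / np.sqrt(n)
                            + (lam / n) * cp.norm1(beta))
    cp.Problem(objective).solve()
    print("nonzeros recovered:", int(np.sum(np.abs(beta.value) > 1e-4)))

Because the square-root loss rescales the residual norm, the stated lambda is valid without a preliminary estimate of sigma, which is the "pivotal" property emphasized in the title.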
Related Items
An efficient semismooth Newton method for adaptive sparse signal recovery problems
Zero-norm regularized problems: equivalent surrogates, proximal MM method and statistical error bound
The EAS approach for graphical selection consistency in vector autoregression models
Group sparse recovery via group square-root elastic net and the iterative multivariate thresholding-based algorithm
A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
Smooth over-parameterized solvers for non-smooth structured optimization
A dual semismooth Newton based augmented Lagrangian method for large-scale linearly constrained sparse group square-root Lasso problems
Hedonic pricing modelling with unstructured predictors: an application to Italian fashion industry
Inference for high-dimensional linear models with locally stationary error processes
Robust oracle estimation and uncertainty quantification for possibly sparse quantiles
Optimal learning
Online Debiasing for Adaptively Collected High-Dimensional Data With Applications to Time Series Analysis
High-dimensional inference robust to outliers with ℓ1-norm penalization
Best subset selection with shrinkage: sparse additive hazards regression with the grouping effect
Sparse additive models in high dimensions with wavelets
Inference robust to outliers with ℓ1-norm penalization
A Tuning-free Robust and Efficient Approach to High-dimensional Regression
Comment on “A Tuning-Free Robust and Efficient Approach to High-Dimensional Regression”
Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
Ridge regression and asymptotic minimax estimation over spheres of growing dimension
High-dimensional regression with unknown variance
A general theory of concave regularization for high-dimensional sparse estimation problems
Optimal Estimation of Genetic Relatedness in High-Dimensional Linear Models
Robust Wasserstein profile inference and applications to machine learning
Solution paths of variational regularization methods for inverse problems
The Partial Linear Model in High Dimensions
Oracle Inequalities for Convex Loss Functions with Nonlinear Targets
Worst possible sub-directions in high-dimensional models
Penalized and constrained LAD estimation in fixed and high dimension
Significance testing in non-sparse high-dimensional linear models
A general family of trimmed estimators for robust high-dimensional data analysis
A unified convergence rate analysis of the accelerated smoothed gap reduction algorithm
A self-calibrated direct approach to precision matrix estimation and linear discriminant analysis in high dimensions
Regularization for high-dimensional covariance matrix
WARPd: A Linearly Convergent First-Order Primal-Dual Algorithm for Inverse Problems with Approximate Sharpness Conditions
Post-model-selection inference in linear regression models: an integrated review
On estimation of the diagonal elements of a sparse precision matrix
Joint estimation and variable selection for mean and dispersion in proper dispersion models
The benefit of group sparsity in group inference with de-biased scaled group Lasso
Thresholding tests based on affine Lasso to achieve non-asymptotic nominal level and high power under sparse and dense alternatives in high dimension
Econometric estimation with high-dimensional moment equalities
L0-Regularized Learning for High-Dimensional Additive Hazards Regression
Recovering Structured Signals in Noise: Least-Squares Meets Compressed Sensing
Sharp MSE bounds for proximal denoising
High-dimensional regression with potential prior information on variable importance
A proximal dual semismooth Newton method for zero-norm penalized quantile regression estimator
Self-normalization: taming a wild population in a heavy-tailed world
The Noise Collector for sparse recovery in high dimensions
High-dimensional tests for functional networks of brain anatomic regions
On the regularized risk of distributionally robust learning over deep neural networks
Regularized estimation in sparse high-dimensional multivariate regression, with application to a DNA methylation study
Sure independence screening for analyzing supersaturated designs
Lasso for sparse linear regression with exponentially \(\beta\)-mixing errors
Penalised robust estimators for sparse and high-dimensional linear models
Perspective functions: proximal calculus and applications in high-dimensional statistics
An off-the-grid approach to multi-compartment magnetic resonance fingerprinting
Accuracy assessment for high-dimensional linear regression
The \(L_1\) penalized LAD estimator for high dimensional linear regression
Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
Sparse identification of posynomial models
Debiasing the debiased Lasso with bootstrap
Correcting for unknown errors in sparse high-dimensional function approximation
Double-estimation-friendly inference for high-dimensional misspecified models
Stein's method for nonlinear statistics: a brief survey and recent progress
A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
Adapting to unknown noise level in sparse deconvolution
Goodness-of-Fit Tests for High Dimensional Linear Models
Variable selection for sparse logistic regression
Do log factors matter? On optimal wavelet approximation and the foundations of compressed sensing
Global-local mixtures: a unifying framework
Sign-constrained least squares estimation for high-dimensional regression
Linear regression with sparsely permuted data
Generalization of constraints for high dimensional regression problems
Noisy low-rank matrix completion with general sampling distribution
Inference on the change point under a high dimensional sparse mean shift
Finite-sample analysis of \(M\)-estimators using self-concordance
A two-stage regularization method for variable selection and forecasting in high-order interaction model
Sparse HP filter: finding kinks in the COVID-19 contact rate
Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons
Proximal alternating penalty algorithms for nonsmooth constrained convex optimization
High-dimensional inference in misspecified linear models
Oracle inequalities for high dimensional vector autoregressions
Oracle inequalities for high-dimensional prediction
Improved bounds for square-root Lasso and square-root slope
Uniformly valid post-regularization confidence regions for many functional parameters in z-estimation framework
Adaptive estimation of high-dimensional signal-to-noise ratios
Selective inference with a randomized response
Robust subspace clustering
Pivotal estimation via square-root lasso in nonparametric regression
Adaptive smoothing algorithms for nonsmooth composite convex minimization
Group Inference in High Dimensions with Applications to Hierarchical Testing
Testing Endogeneity with High Dimensional Covariates
On asymptotically optimal confidence regions and tests for high-dimensional models
Greedy variance estimation for the LASSO
Prediction error bounds for linear regression with the TREX
Double Machine Learning for Partially Linear Mixed-Effects Models with Repeated Measurements
Simultaneous feature selection and clustering based on square root optimization
Confidence intervals for high-dimensional inverse covariance estimation
Simplex QP-based methods for minimizing a conic quadratic objective over polyhedra
High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
Optimal sparsity testing in linear regression model
A Projection Based Conditional Dependence Measure with Applications to High-dimensional Undirected Graphical Models
Lasso meets horseshoe: a survey
A study on tuning parameter selection for the high-dimensional lasso
Tuning-Free Heterogeneity Pursuit in Massive Networks
Iteratively reweighted \(\ell_1\)-penalized robust regression
A permutation approach for selecting the penalty parameter in penalized model selection
Asymptotic normality and optimalities in estimation of large Gaussian graphical models
Honest confidence regions and optimality in high-dimensional precision matrix estimation
Gaussian graphical model estimation with false discovery rate control
Variable selection with spatially autoregressive errors: a generalized moments Lasso estimator
Optimal designs in sparse linear models
Prediction bounds for higher order total variation regularized least squares
Non-Convex Global Minimization and False Discovery Rate Control for the TREX
l1-Penalised Ordinal Polytomous Regression Estimators with Application to Gene Expression Studies
An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
Sharp Oracle Inequalities for Square Root Regularization
Scale calibration for high-dimensional robust regression
Sparse Poisson regression with penalized weighted score function
Nonsparse Learning with Latent Variables
A knockoff filter for high-dimensional selective inference
Group penalized quantile regression
Linear Hypothesis Testing in Dense High-Dimensional Linear Models
An Inexact Augmented Lagrangian Method for Second-Order Cone Programming with Applications
A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks
The sparsity of LASSO-type minimizers