Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
Publication: 5743269
DOI: 10.1111/rssb.12026
zbMath: 1411.62196
arXiv: 1110.2563
OpenAlex: W2069119359
MaRDI QID: Q5743269
Authors: Cun-Hui Zhang, Stephanie S. Zhang
Publication date: 9 May 2019
Published in: Journal of the Royal Statistical Society Series B: Statistical Methodology
Full work available at URL: https://arxiv.org/abs/1110.2563
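The paper constructs confidence intervals for individual coefficients in a sparse high-dimensional linear model via a low-dimensional projection (debiased Lasso) estimator: an initial Lasso fit is bias-corrected along a score direction obtained from a nodewise Lasso regression of one column of the design on the others. The sketch below illustrates the idea in Python; the function name `debiased_lasso_ci`, the tuning-parameter rule, the plug-in noise estimate (the paper itself uses the scaled Lasso), and the scikit-learn estimators are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a debiased (low-dimensional projection) Lasso
# confidence interval, assuming centered X and y and Gaussian noise.
import numpy as np
from scipy import stats
from sklearn.linear_model import Lasso

def debiased_lasso_ci(X, y, j, alpha=0.05):
    """(1 - alpha) confidence interval for the j-th coefficient."""
    n, p = X.shape
    lam = np.sqrt(2 * np.log(p) / n)  # a common theory-driven choice
    beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_

    # Nodewise Lasso: regress X_j on the remaining columns to build the
    # relaxed projection (score) direction z_j.
    idx = np.arange(p) != j
    gamma = Lasso(alpha=lam, fit_intercept=False).fit(X[:, idx], X[:, j]).coef_
    z = X[:, j] - X[:, idx] @ gamma

    # One-step bias correction of the initial Lasso estimate along z.
    resid = y - X @ beta_hat
    b_j = beta_hat[j] + (z @ resid) / (z @ X[:, j])

    # Plug-in noise level and normal-approximation interval.
    df = max(n - np.count_nonzero(beta_hat), 1)
    sigma_hat = np.sqrt(resid @ resid / df)
    se = sigma_hat * np.linalg.norm(z) / abs(z @ X[:, j])
    q = stats.norm.ppf(1 - alpha / 2)
    return b_j - q * se, b_j + q * se
```

Repeating the nodewise step coordinate by coordinate gives per-coefficient intervals; roughly, the normal approximation requires the sparsity to satisfy \(s = o(\sqrt{n}/\log p)\).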
Related Items (showing first 100 items)
Doubly robust tests of exposure effects under high‐dimensional confounding ⋮ Statistical inference for Cox proportional hazards models with a diverging number of covariates ⋮ A new test for high‐dimensional regression coefficients in partially linear models ⋮ Inducement of population sparsity ⋮ Assessing mediating effects of high‐dimensional microbiome measurements in dietary intervention studies ⋮ Inference on heterogeneous treatment effects in high‐dimensional dynamic panels under weak dependence ⋮ Score Tests With Incomplete Covariates and High-Dimensional Auxiliary Variables ⋮ Variable selection and debiased estimation for single‐index expectile model ⋮ Poststratification fusion learning in longitudinal data analysis ⋮ A weak‐signal‐assisted procedure for variable selection and statistical inference with an informative subsample ⋮ Assessing the Most Vulnerable Subgroup to Type II Diabetes Associated with Statin Usage: Evidence from Electronic Health Record Data ⋮ A Scale-Free Approach for False Discovery Rate Control in Generalized Linear Models ⋮ Discussion of “A Scale-Free Approach for False Discovery Rate Control in Generalized Linear Models” by Dai, Lin, Xing, and Liu ⋮ Discussion of “A Scale-Free Approach for False Discovery Rate Control in Generalized Linear Models” ⋮ Orthogonalized Kernel Debiased Machine Learning for Multimodal Data Analysis ⋮ Sparse Topic Modeling: Computational Efficiency, Near-Optimal Algorithms, and Statistical Inference ⋮ CEDAR: Communication Efficient Distributed Analysis for Regressions ⋮ Debiased lasso for generalized linear models with a diverging number of covariates ⋮ Penalized Regression for Multiple Types of Many Features With Missing Data ⋮ Weak Signal Identification and Inference in Penalized Likelihood Models for Categorical Responses ⋮ Deconfounding and Causal Regularisation for Stability and External Validity ⋮ POST-SELECTION INFERENCE IN THREE-DIMENSIONAL PANEL DATA ⋮ Double bias correction for high-dimensional sparse additive hazards regression with covariate measurement errors ⋮ Simultaneous test for linear model via projection ⋮ Controlling False Discovery Rate Using Gaussian Mirrors ⋮ Model-Assisted Uniformly Honest Inference for Optimal Treatment Regimes in High Dimension ⋮ Honest Confidence Sets for High-Dimensional Regression by Projection and Shrinkage ⋮ Scalable and efficient inference via CPE ⋮ Debiased machine learning of set-identified linear models ⋮ Statistical Inference for High-Dimensional Generalized Linear Models With Binary Outcomes ⋮ Kernel Ordinary Differential Equations ⋮ Inference for High-Dimensional Linear Mixed-Effects Models: A Quasi-Likelihood Approach ⋮ Testing Mediation Effects Using Logic of Boolean Matrices ⋮ Individual Data Protected Integrative Regression Analysis of High-Dimensional Heterogeneous Data ⋮ Integrative Factor Regression and Its Inference for Multimodal Data Analysis ⋮ Tuning parameter selection for penalized estimation via \(R^2\) ⋮ Debiasing convex regularized estimators and interval estimation in linear models ⋮ High-dimensional robust inference for censored linear models ⋮ Directed graphs and variable selection in large vector autoregressive models ⋮ Generalized matrix decomposition regression: estimation and inference for two-way structured data ⋮ Debiased Lasso for stratified Cox models with application to the national kidney transplant data ⋮ Robust inference for high‐dimensional single index models ⋮ Online inference in high-dimensional generalized linear models with streaming data ⋮ 
False Discovery Rate Control via Data Splitting ⋮ Distributionally robust and generalizable inference ⋮ Inference for high‐dimensional linear models with locally stationary error processes ⋮ Post-selection Inference of High-dimensional Logistic Regression Under Case–Control Design ⋮ Retire: robust expectile regression in high dimensions ⋮ Inference on the best policies with many covariates ⋮ Reprint: Statistical inference for linear mediation models with high-dimensional mediators and application to studying stock reaction to COVID-19 pandemic ⋮ Optimal decorrelated score subsampling for generalized linear models with massive data ⋮ Statistical inference for linear mediation models with high-dimensional mediators and application to studying stock reaction to COVID-19 pandemic ⋮ Inference for sparse linear regression based on the leave-one-covariate-out solution path ⋮ Inference for High-Dimensional Censored Quantile Regression ⋮ Online Debiasing for Adaptively Collected High-Dimensional Data With Applications to Time Series Analysis ⋮ High-dimensional inference robust to outliers with ℓ1-norm penalization ⋮ Neighborhood-based cross fitting approach to treatment effects with high-dimensional data ⋮ On lower bounds for the bias-variance trade-off ⋮ Universality of regularized regression estimators in high dimensions ⋮ The Lasso with general Gaussian designs with applications to hypothesis testing ⋮ Communication‐efficient low‐dimensional parameter estimation and inference for high‐dimensional \(L^p\)‐quantile regression ⋮ A penalised bootstrap estimation procedure for the explained Gini coefficient ⋮ Estimation and inference in sparse multivariate regression and conditional Gaussian graphical models under an unbalanced distributed setting ⋮ Discussion ⋮ Moderate-Dimensional Inferences on Quadratic Functionals in Ordinary Least Squares ⋮ LIC criterion for optimal subset selection in distributed interval estimation ⋮ Worst possible sub-directions in high-dimensional models ⋮ Covariate-adjusted inference for differential analysis of high-dimensional networks ⋮ Statistical inference in sparse high-dimensional additive models ⋮ Lasso-driven inference in time and space ⋮ Significance testing in non-sparse high-dimensional linear models ⋮ Variable selection in high-dimensional linear model with possibly asymmetric errors ⋮ Mathematical foundations of machine learning. 
Abstracts from the workshop held March 21--27, 2021 (hybrid meeting) ⋮ Testability of high-dimensional linear models with nonsparse structures ⋮ Inference for low-rank tensors -- no need to debias ⋮ Bayesian high-dimensional semi-parametric inference beyond sub-Gaussian errors ⋮ Confidence intervals for high-dimensional partially linear single-index models ⋮ Recent advances in statistical methodologies in evaluating program for high-dimensional data ⋮ On the post selection inference constant under restricted isometry properties ⋮ A unified theory of confidence regions and testing for high-dimensional estimating equations ⋮ Constructing confidence intervals for the signals in sparse phase retrieval ⋮ High-dimensional sufficient dimension reduction through principal projections ⋮ De-biasing the Lasso with degrees-of-freedom adjustment ⋮ Doubly robust semiparametric inference using regularized calibrated estimation with high-dimensional data ⋮ The asymptotic distribution of the MLE in high-dimensional logistic models: arbitrary covariance ⋮ Post-model-selection inference in linear regression models: an integrated review ⋮ Thresholding tests based on affine Lasso to achieve non-asymptotic nominal level and high power under sparse and dense alternatives in high dimension ⋮ Oracle inequalities, variable selection and uniform inference in high-dimensional correlated random effects panel data models ⋮ Testing a single regression coefficient in high dimensional linear models ⋮ Doubly debiased Lasso: high-dimensional inference under hidden confounding ⋮ Ridge regression revisited: debiasing, thresholding and bootstrap ⋮ Kernel-penalized regression for analysis of microbiome data ⋮ Distributed testing and estimation under sparse high dimensional models ⋮ Hierarchical correction of \(p\)-values via an ultrametric tree running Ornstein-Uhlenbeck process ⋮ Sparse linear models and \(l_1\)-regularized 2SLS with high-dimensional endogenous regressors and instruments ⋮ Single-index composite quantile regression for ultra-high-dimensional data ⋮ High-dimensional inference for personalized treatment decision ⋮ A unifying framework of high-dimensional sparse estimation with difference-of-convex (DC) regularizations ⋮ Gene set priorization guided by regulatory networks with p-values through kernel mixed model
Uses Software
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- Statistics for high-dimensional data. Methods, theory and applications.
- The Dantzig selector and sparsity oracle inequalities
- Comments on: \(\ell_{1}\)-penalization for mixture regression models
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
- Can one estimate the conditional distribution of post-model-selection estimators?
- One-step sparse estimates in nonconcave penalized likelihood models
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Nonconcave penalized likelihood with a diverging number of parameters.
- On the conditions used to prove oracle results for the Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Regularized estimation of large covariance matrices
- High-dimensional graphs and variable selection with the Lasso
- Atomic Decomposition by Basis Pursuit
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Adaptive Confidence Intervals for the Test Error in Classification
- Scaled sparse linear regression
- A Selective Overview of Variable Selection in High Dimensional Feature Space (Invited Review Article)
- Decoding by Linear Programming
- Just relax: convex programming methods for identifying sparse signals in noise
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Stability Selection
- A Statistical View of Some Chemometrics Regression Tools
- Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations
- Smoothly Clipped Absolute Deviation on High Dimensions
- A general theory of concave regularization for high-dimensional sparse estimation problems