On consistency and sparsity for sliced inverse regression in high dimensions
Publication: 1750280
DOI: 10.1214/17-AOS1561
zbMath: 1395.62196
arXiv: 1507.03895
OpenAlex: W2963178286
MaRDI QID: Q1750280
Qian Lin, Jun S. Liu, Zhigen Zhao
Publication date: 18 May 2018
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1507.03895
Classification (MSC)
- Factor analysis and principal components; correspondence analysis (62H25)
- Asymptotic properties of nonparametric inference (62G20)
Related Items
- A selective overview of sparse sufficient dimension reduction
- Online sparse sliced inverse regression for high-dimensional streaming data
- Sparse Sliced Inverse Regression via Cholesky Matrix Penalization
- Distributed Sufficient Dimension Reduction for Heterogeneous Massive Data
- Data-guided Treatment Recommendation with Feature Scores
- High-dimensional sufficient dimension reduction through principal projections
- Dimension Reduction Forests: Local Variable Importance Using Structured Random Forests
- Sparse sufficient dimension reduction with heteroscedasticity
- Rates of convergence in conditional covariance matrix with nonparametric entries estimation
- On the optimality of sliced inverse regression in high dimensions
- Convergence guarantee for the sparse monotone single index model
- Central subspaces review: methods and applications
- Sparse SIR: optimal rates and adaptive estimation
- Sufficient dimension reduction for clustered data via finite mixture modelling
- An Outer-Product-of-Gradient Approach to Dimension Reduction and its Application to Classification in High Dimensional Space
- Simultaneous estimation for semi-parametric multi-index models
- Misspecified nonconvex statistical optimization for sparse phase retrieval
- Projection divergence in the reproducing kernel Hilbert space: asymptotic normality, block-wise and slicing estimation, and computational efficiency
- Double-slicing assisted sufficient dimension reduction for high-dimensional censored data
- Sliced Independence Test
- Graph informed sliced inverse regression
- On consistency and sparsity for sliced inverse regression in high dimensions
- Slice inverse regression with score functions
- Optimal estimation of slope vector in high-dimensional linear transformation models
- Interpretable sparse SIR for functional data
- High-dimensional index volatility models via Stein's identity
- Sparse Sliced Inverse Regression Via Lasso
- Robust Variable and Interaction Selection for Logistic Regression and General Index Models
- Dimension reduction for block-missing data based on sparse sliced inverse regression
- Fourier transform sparse inverse regression estimators for sufficient variable selection
- Sparse Minimum Discrepancy Approach to Sufficient Dimension Reduction with Simultaneous Variable Selection in Ultrahigh Dimension
Cites Work
- Measuring and testing dependence by correlation of distances
- Model-Free Feature Screening for Ultrahigh-Dimensional Data
- Estimating sufficient reductions of the predictors in abundant high-dimensional regressions
- Variable selection for general index models via sliced inverse regression
- Signed support recovery for single index models in high-dimensions
- Optimal rates of convergence for covariance matrix estimation
- Covariance regularization by thresholding
- An asymptotic theory for sliced inverse regression
- On consistency and sparsity for sliced inverse regression in high dimensions
- Asymptotics for kernel estimate of sliced inverse regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\) (with discussions and rejoinder)
- Sliced Inverse Regression for Dimension Reduction
- Graphics for Regressions With a Binary Response
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- On Consistency and Sparsity for Principal Components Analysis in High Dimensions
- Controlling Variable Selection by the Addition of Pseudovariables
- Regularization and Variable Selection Via the Elastic Net
- Model-Free Feature Screening for Ultrahigh Dimensional Discriminant Analysis
- Dimension reduction and predictor selection in semiparametric models
- Sparse sufficient dimension reduction
- Correlation Pursuit: Forward Stepwise Variable Selection for Index Models
- On Sliced Inverse Regression With High-Dimensional Covariates