Sparse Minimum Discrepancy Approach to Sufficient Dimension Reduction with Simultaneous Variable Selection in Ultrahigh Dimension
Publication: 5242475
DOI: 10.1080/01621459.2018.1497498
zbMath: 1428.62237
OpenAlex: W2884101506
MaRDI QID: Q5242475
Wei Qian, R. Dennis Cook, Shanshan Ding
Publication date: 12 November 2019
Published in: Journal of the American Statistical Association
Full work available at URL: https://doi.org/10.1080/01621459.2018.1497498
Keywords: sparsity; sliced average variance estimation; inverse regression; sliced inverse regression; central subspace; principal fitted component
MSC classifications:
- Factor analysis and principal components; correspondence analysis (62H25)
- Estimation in multivariate analysis (62H12)
- Nonparametric estimation (62G05)
Related Items
- A selective overview of sparse sufficient dimension reduction
- A Minimum Discrepancy Approach With Fourier Transform in Sufficient Dimension Reduction
- Sparse Sliced Inverse Regression via Cholesky Matrix Penalization
- Model averaging assisted sufficient dimension reduction
- Ultrahigh-dimensional sufficient dimension reduction for censored data with measurement error in covariates
- Central subspaces review: methods and applications
- A structured covariance ensemble for sufficient dimension reduction
- An Outer-Product-of-Gradient Approach to Dimension Reduction and its Application to Classification in High Dimensional Space
- Likelihood-based surrogate dimension reduction
- Double-slicing assisted sufficient dimension reduction for high-dimensional censored data
- An ensemble of inverse moment estimators for sufficient dimension reduction
- Fourier transform sparse inverse regression estimators for sufficient variable selection
- Sliced inverse regression for integrative multi-omics data analysis
Cites Work
- Model-Free Feature Screening for Ultrahigh-Dimensional Data
- Sparse Generalized Eigenvalue Problem: Optimal Statistical Rates via Truncated Rayleigh Flow
- Nearly unbiased variable selection under minimax concave penalty
- Estimating sufficient reductions of the predictors in abundant high-dimensional regressions
- Sliced Regression for Dimension Reduction
- The Adaptive Lasso and Its Oracle Properties
- Fisher lecture: Dimension reduction in regression
- Sufficient dimension reduction based on an ensemble of minimum average variance estimators
- Tensor sliced inverse regression
- On marginal sliced inverse regression for ultrahigh dimensional model-free feature selection
- Statistics for high-dimensional data: methods, theory and applications
- Sparse linear discriminant analysis by thresholding for high dimensional data
- Coordinate-independent sparse sufficient dimension reduction and variable selection
- Consistent group selection in high-dimensional linear regression
- On almost linearity of low dimensional projections from high dimensional data
- The MM alternative to EM
- Principal fitted components for dimension reduction in regression
- A coordinate gradient descent method for nonsmooth separable minimization
- Covariance regularization by thresholding
- Operator norm consistent estimation of large-dimensional sparse covariance matrices
- Sparsistency and rates of convergence in large covariance matrix estimation
- Estimating a sparse reduction for general regression in high dimensions
- On consistency and sparsity for sliced inverse regression in high dimensions
- Testing predictor contributions in sufficient dimension reduction
- On dimension folding of matrix- or array-valued statistical objects
- Simultaneous analysis of Lasso and Dantzig selector
- Regularized estimation of large covariance matrices
- Positive definite estimators of large covariance matrices
- Asymptotic properties of sufficient dimension reduction with a diverging number of predictors
- On Directional Regression for Dimension Reduction
- Sliced Inverse Regression for Dimension Reduction
- On the Interpretation of Regression Plots
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Feature Screening via Distance Correlation Learning
- Positive-Definite ℓ1-Penalized Estimation of Large Covariance Matrices
- Sparse Reduced-Rank Regression for Simultaneous Dimension Reduction and Variable Selection
- A Semiparametric Approach to Dimension Reduction
- Generalized Thresholding of Large Covariance Matrices
- Sequential Sufficient Dimension Reduction for Large p, Small n Problems
- Dimension reduction and predictor selection in semiparametric models
- Dimension folding PCA and PFC for matrix-valued predictors
- Model Selection and Estimation in Regression with Grouped Variables
- Sparse sufficient dimension reduction
- Sliced Inverse Regression with Regularizations
- Sufficient Dimension Reduction via Inverse Regression
- On Sliced Inverse Regression With High-Dimensional Covariates
- Comment