Sure Independence Screening for Ultrahigh Dimensional Feature Space

From MaRDI portal
Publication: 4632602

DOI: 10.1111/j.1467-9868.2008.00674.x
zbMath: 1411.62187
arXiv: math/0612857
OpenAlex: W2154560360
Wikidata: Q42087328
Scholia: Q42087328
MaRDI QID: Q4632602

Jianqing Fan, Jinchi Lv

Publication date: 30 April 2019

Published in: Journal of the Royal Statistical Society Series B: Statistical Methodology

Full work available at URL: https://arxiv.org/abs/math/0612857
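For context, a minimal sketch of the screening idea the paper introduces: rank features by absolute marginal correlation with the response and retain the top d before any refined fitting. The function name sis, the synthetic data, and the cutoff d = 20 below are illustrative assumptions, not the authors' implementation; the paper combines this screening step with penalized methods such as SCAD or the Dantzig selector, which are omitted here.

```python
import numpy as np

def sis(X, y, d):
    """Sketch of sure independence screening: keep the d features whose
    absolute marginal correlation with y is largest. Returns column indices."""
    # Standardize so the marginal least-squares coefficients reduce to correlations.
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = (y - y.mean()) / y.std()
    omega = np.abs(Xc.T @ yc) / len(y)      # componentwise marginal correlations
    return np.argsort(omega)[::-1][:d]      # indices of the d largest values

# Toy illustration: n = 100 observations, p = 1000 features, 3 active ones.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 1000))
y = X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2] + rng.standard_normal(100)
print(sis(X, y, d=20))  # the active indices 0, 1, 2 should typically be retained
```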



Related Items

SCAD‐penalized quantile regression for high‐dimensional data analysis and variable selection, A new approach for ultrahigh-dimensional covariance matrix estimation, BOLT-SSI: A Statistical Approach to Screening Interaction Effects for Ultra-High Dimensional Data, Robust Feature Screening via Distance Correlation for Ultrahigh Dimensional Data With Responses Missing at Random, Hybrid Hard-Soft Screening for High-dimensional Latent Class Analysis, Ensemble Subset Regression (ENSURE): Efficient High-dimensional Prediction, Score Tests With Incomplete Covariates and High-Dimensional Auxiliary Variables, Greedy Variable Selection for High-Dimensional Cox Models, Ultra high‐dimensional semiparametric longitudinal data analysis, Screening-assisted dynamic multiple testing with false discovery rate control, An Outer-Product-of-Gradient Approach to Dimension Reduction and its Application to Classification in High Dimensional Space, Threshold Selection in Feature Screening for Error Rate Control, Orthogonalized Kernel Debiased Machine Learning for Multimodal Data Analysis, On polygenic risk scores for complex traits prediction, Feature screening with large‐scale and high‐dimensional survival data, Cross-Trait Prediction Accuracy of Summary Statistics in Genome-Wide Association Studies, Feature Screening with Latent Responses, Ultra-High Dimensional Variable Selection for Doubly Robust Causal Inference, Screening Methods for Linear Errors-in-Variables Models in High Dimensions, Clustering High-Dimensional Data via Feature Selection, A General Framework of Nonparametric Feature Selection in High-Dimensional Data, Integrative sparse reduced-rank regression via orthogonal rotation for analysis of high-dimensional multi-source data, Nonparametric instrument model averaging, Volume under the ROC surface for high-dimensional independent screening with ordinal competing risk outcomes, Quantile forward regression for high-dimensional survival data, A New Model-Free Feature Screening Procedure for Ultrahigh-Dimensional Interval-Censored Failure Time Data, Partial sufficient variable screening with categorical controls, Semiparametric penalized quadratic inference functions for longitudinal data in ultra-high dimensions, Identification of outlying observations for large-dimensional data, One-step sparse estimates in the reverse penalty for high-dimensional correlated data, Feature selection in ultrahigh-dimensional additive models with heterogeneous frequency component functions, Model aggregation for doubly divided data with large size and large dimension, Simultaneous test for linear model via projection, Ultra-High Dimensional Quantile Regression for Longitudinal Data: An Application to Blood Pressure Analysis, Variable Selection Via Thompson Sampling, RaSE: A Variable Screening Framework via Random Subspace Ensembles, A Joint MLE Approach to Large-Scale Structured Latent Attribute Analysis, Scalable and efficient inference via CPE, Feature Screening for Interval-Valued Response with Application to Study Association between Posted Salary and Required Skills, Mapping the Genetic-Imaging-Clinical Pathway with Applications to Alzheimer’s Disease, Testing Mediation Effects Using Logic of Boolean Matrices, Individual Data Protected Integrative Regression Analysis of High-Dimensional Heterogeneous Data, Block-diagonal precision matrix regularization for ultra-high dimensional data, A new covariate selection strategy for high dimensional data in causal effect estimation with multivariate treatments, Semiparametric 
model averaging method for survival probability predictions of patients, Distributed smoothed rank regression with heterogeneous errors for massive data, Penalized \(M\)-estimation based on standard error adjusted adaptive elastic-net, Optimal Treatment Regimes: A Review and Empirical Comparison, Forward selection for feature screening and structure identification in varying coefficient models, The Kendall interaction filter for variable interaction screening in high dimensional classification problems, A general framework for penalized mixed-effects multitask learning with applications on DNA methylation surrogate biomarkers creation, An Approximated Collapsed Variational Bayes Approach to Variable Selection in Linear Regression, Scalable Model-Free Feature Screening via Sliced-Wasserstein Dependency, Model‐free conditional screening for ultrahigh‐dimensional survival data via conditional distance correlation, Double penalized regularization estimation for partially linear instrumental variable models with ultrahigh dimensional instrumental variables, Mediation analysis method review of high throughput data, Model-Free Conditional Feature Screening with FDR Control, Risk spillover network structure learning for correlated financial assets: a directed acyclic graph approach, Post-selection Inference of High-dimensional Logistic Regression Under Case–Control Design, A generalized knockoff procedure for FDR control in structural change detection, A post-screening diagnostic study for ultrahigh dimensional data, Optimal Nonparametric Inference with Two-Scale Distributional Nearest Neighbors, Conditional characteristic feature screening for massive imbalanced data, Orthogonality based penalized GMM estimation for variable selection in partially linear spatial autoregressive models, Sparse dimension reduction based on energy and ball statistics, Inference for sparse linear regression based on the leave-one-covariate-out solution path, Derandomizing Knockoffs, Generalized martingale difference divergence: detecting conditional mean independence with applications in variable screening, Explaining classifiers with measures of statistical association, A quadratic upper bound algorithm for regression analysis of credit risk under the proportional hazards model with case-cohort data, Empirical likelihood based tests for detecting the presence of significant predictors in marginal quantile regression, Variable selection for categorical response: a comparative study, A dynamic screening algorithm for hierarchical binary marketing data, Post-selection inference via algorithmic stability, Sufficient variable screening with high-dimensional controls, Structure learning via unstructured kernel-based M-estimation, Measures of Uncertainty for Shrinkage Model Selection, Subgroup analysis using Bernoulli‐gated hierarchical mixtures of experts models, Nonparametric Prediction Distribution from Resolution-Wise Regression with Heterogeneous Data, A Scalable Frequentist Model Averaging Method, Homogeneity and Sparsity Analysis for High-Dimensional Panel Data Models, Estimations and Tests for Generalized Mediation Models with High-Dimensional Potential Mediators, Supervised homogeneity fusion: a combinatorial approach, A tradeoff between false discovery and true positive proportions for sparse high-dimensional logistic regression, High-dimensional local linear regression under sparsity and convex losses, Unnamed Item, Unnamed Item, Unnamed Item, An Updated Literature Review of Distance Correlation and Its 
Applications to Time Series, A Model-free Variable Screening Method Based on Leverage Score, Moderate-Dimensional Inferences on Quadratic Functionals in Ordinary Least Squares, Unnamed Item, Unnamed Item, Surrogate-variable-based model-free feature screening for survival data under the general censoring mechanism, Interaction identification and clique screening for classification with ultra-high dimensional discrete features, \(\ell_0\)-regularized high-dimensional accelerated failure time model, Nonparametric feature selection by random forests and deep neural networks, RCV-based error density estimation in the ultrahigh dimensional additive model, GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee, A general framework for tensor screening through smoothing, On sufficient variable screening using log odds ratio filter, New hard-thresholding rules based on data splitting in high-dimensional imbalanced classification, A new nonparametric screening method for ultrahigh-dimensional survival data, Robust feature screening for ultra-high dimensional right censored data via distance correlation, Fused mean-variance filter for feature screening, Variable selection for high dimensional Gaussian copula regression model: an adaptive hypothesis testing procedure, Sparse pathway-based prediction models for high-throughput molecular data, On the oracle property of a generalized adaptive elastic-net for multivariate linear regression with a diverging number of parameters, An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors, Adjusted Pearson chi-square feature screening for multi-classification with ultrahigh dimensional data, Conditional screening for ultra-high dimensional covariates with survival outcomes, Model-free conditional independence feature screening for ultrahigh dimensional data, Model-free feature screening for ultrahigh dimensional censored regression, Extended differential geometric LARS for high-dimensional GLMs with general dispersion parameter, Asymptotically honest confidence regions for high dimensional parameters by the desparsified conservative Lasso, Optimal directional statistic for general regression, Robust conditional nonparametric independence screening for ultrahigh-dimensional data, A simple model-free survival conditional feature screening, Model selection using mass-nonlocal prior, Covariance-insured screening, Feature screening in ultrahigh-dimensional partially linear models with missing responses at random, Regression adjustment for treatment effect with multicollinearity in high dimensions, Modified SCAD penalty for constrained variable selection problems, Nonconvex penalized ridge estimations for partially linear additive models in ultrahigh dimension, Feature elimination in kernel machines in moderately high dimensions, An RKHS model for variable selection in functional linear regression, Determination of vector error correction models in high dimensions, Portal nodes screening for large scale social networks, Stable feature screening for ultrahigh dimensional data, Variable screening for ultrahigh dimensional heterogeneous data via conditional quantile correlations, Model-free feature screening for ultrahigh-dimensional data conditional on some variables, Variable screening for high dimensional time series, Hypothesis testing sure independence screening for nonparametric regression, Conditional mean and quantile dependence testing in high dimension, 
Model-free feature screening for high-dimensional survival data, Conditional-quantile screening for ultrahigh-dimensional survival data via martingale difference correlation, Debiasing the Lasso: optimal sample size for Gaussian designs, Measuring and testing for interval quantile dependence, Principles of experimental design for big data analysis, On consistency and sparsity for sliced inverse regression in high dimensions, I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error, A retail store SKU promotions optimization model for category multi-period profit maximization, Beyond Gaussian approximation: bootstrap for maxima of sums of independent random vectors, Optimal estimation of slope vector in high-dimensional linear transformation models, A method for selecting the relevant dimensions for high-dimensional classification in singular vector spaces, Feature screening for nonparametric and semiparametric models with ultrahigh-dimensional covariates, Fused variable screening for massive imbalanced data, Feature screening for ultrahigh dimensional categorical data with covariates missing at random, A nonparametric feature screening method for ultrahigh-dimensional missing response, Approximate least squares estimation for spatial autoregressive models with covariates, A note on quantile feature screening via distance correlation, Low-dimensional confounder adjustment and high-dimensional penalized estimation for survival analysis, Determining cutoff point of ensemble trees based on sample size in predicting clinical dose with DNA microarray data, A distribution-based Lasso for a general single-index model, Variable selection for partially linear models via Bayesian subset modeling with diffusing prior, Nonparametric independence feature screening for ultrahigh-dimensional survival data, Penalized empirical likelihood for partially linear errors-in-variables panel data models with fixed effects, Feature screening based on distance correlation for ultrahigh-dimensional censored data with covariate measurement error, An efficient algorithm for joint feature screening in ultrahigh-dimensional Cox's model, High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}, An RKHS-based approach to double-penalized regression in high-dimensional partially linear models, Broken adaptive ridge regression and its asymptotic properties, Factor-adjusted multiple testing of correlations, Feature screening for multi-response varying coefficient models with ultrahigh dimensional predictors, Inference without compatibility: using exponential weighting for inference on a parameter of a linear model, Optimal feature selection for sparse linear discriminant analysis and its applications in gene expression data, Change-point detection in multinomial data with a large number of categories, High-dimensional variable selection via low-dimensional adaptive learning, Learning sparse conditional distribution: an efficient kernel-based approach, Multivariate variable selection by means of null-beamforming, Gini correlation for feature screening, Regularization parameter selection for the low rank matrix recovery, Model-free feature screening via distance correlation for ultrahigh dimensional survival data, Advanced topics in sliced inverse regression, Variable selection in functional regression models: a review, Model-robust subdata selection for big data, Feature screening for ultrahigh-dimensional survival data when failure indicators are 
missing at random, Sure independence screening in the presence of missing data, Stable correlation and robust feature screening, Projection quantile correlation and its use in high-dimensional grouped variable screening, Conditional screening for ultrahigh-dimensional survival data in case-cohort studies, Broken adaptive ridge regression for right-censored survival data, Fast feature selection via streamwise procedure for massive data, VCSEL: prioritizing SNP-set by penalized variance component selection, Distribution-free and model-free multivariate feature screening via multivariate rank distance correlation, Non-marginal feature screening for varying coefficient competing risks model, Unified mean-variance feature screening for ultrahigh-dimensional regression, Interaction screening via canonical correlation, Model-free global likelihood subsampling for massive data, Revisiting feature selection for linear models with FDR and power guarantees, High-dimensional variable screening through kernel-based conditional mean dependence, Asymptotic properties of high-dimensional random forests, Asset selection based on high frequency Sharpe ratio, Combining a relaxed EM algorithm with Occam's razor for Bayesian variable selection in high-dimensional regression, Nonparametric feature screening, A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery, Conditional distance correlation screening for sparse ultrahigh-dimensional models, Detecting weak signals in high dimensions, Forward variable selection for sparse ultra-high-dimensional generalized varying coefficient models, An attention algorithm for solving large scale structured \(l_0\)-norm penalty estimation problems, Model-free sure screening via maximum correlation, Fitting sparse linear models under the sufficient and necessary condition for model identification, Maximum-type tests for high-dimensional regression coefficients using Wilcoxon scores, On selecting interacting features from high-dimensional data, Using random subspace method for prediction and variable importance assessment in linear regression, LOL selection in high dimension, Group subset selection for linear regression, A sequential test for variable selection in high dimensional complex data, Censored mean variance sure independence screening for ultrahigh dimensional survival data, A nonparametric empirical Bayes approach to large-scale multivariate regression, Partition-based feature screening for categorical data via RKHS embeddings, Model-free variable selection for conditional mean in regression, Hybrid safe-strong rules for efficient optimization in Lasso-type problems, Group orthogonal greedy algorithm for change-point estimation of multivariate time series, A scalable surrogate \(L_0\) sparse regression method for generalized linear models with applications to large scale data, Network-based feature screening with applications to genome data, Variable selection for survival data with a class of adaptive elastic net techniques, Individual-level social influence identification in social media: a learning-simulation coordinated method, Variable selection in censored quantile regression with high dimensional data, Robust dependence measure for detecting associations in large data set, Smooth sparse coding via marginal regression for learning sparse representations, Random subspace method for high-dimensional regression with the \texttt{R} package \texttt{regRSM}, Oracle inequalities, variable selection and uniform inference in 
high-dimensional correlated random effects panel data models, Testing a single regression coefficient in high dimensional linear models, A rank-corrected procedure for matrix completion with fixed basis coefficients, Conditional feature screening for mean and variance functions in models with multiple-index structure, Censored cumulative residual independent screening for ultrahigh-dimensional survival data, Demand forecasting with high dimensional data: the case of SKU retail sales forecasting with intra- and inter-category promotional information, Group-wise semiparametric modeling: a SCSE approach, Nonparametric independence screening via favored smoothing bandwidth, Conditional quantile correlation screening procedure for ultrahigh-dimensional varying coefficient models, Asymtotics of Dantzig selector for a general single-index model, Powerful test based on conditional effects for genome-wide screening, PBoostGA: pseudo-boosting genetic algorithm for variable ranking and selection, Test for high-dimensional regression coefficients using refitted cross-validation variance estimation, Screening group variables in the proportional hazards model, Ultrahigh dimensional feature screening via projection, Two-layer EM algorithm for ALD mixture regression models: a new solution to composite quantile regression, Principal components adjusted variable screening, Correlation rank screening for ultrahigh-dimensional survival data, Variable selection using shrinkage priors, Canonical kernel dimension reduction, Model free feature screening for ultrahigh dimensional data with responses missing at random, Feature screening for generalized varying coefficient models with application to dichotomous responses, Adaptive conditional feature screening, Jackknife empirical likelihood test for high-dimensional regression coefficients, High-dimensional multivariate posterior consistency under global-local shrinkage priors, Estimating and testing conditional sums of means in high dimensional multivariate binary data, Variable selection in high-dimensional quantile varying coefficient models, Asymptotics of hierarchical clustering for growing dimension, On the distance concentration awareness of certain data reduction techniques, Goodness-of-fit testing-based selection for large-\(p\)-small-\(n\) problems: a two-stage ranking approach, Regression with outlier shrinkage, Discussion of: ``Grouping strategies and thresholding for high dimension linear models, A selective overview of feature screening for ultrahigh-dimensional data, Phase transition in limiting distributions of coherence of high-dimensional random matrices, Dimension reduction based linear surrogate variable approach for model free variable selection, Robust model-free feature screening via quantile correlation, Testing covariates in high dimension linear regression with latent factors, On efficient calculations for Bayesian variable selection, Semiparametric regression models with additive nonparametric components and high dimensional parametric components, Selection of tuning parameters in bridge regression models via Bayesian information criterion, Bayesian high-dimensional screening via MCMC, SCAD penalized rank regression with a diverging number of parameters, Robust sure independence screening for ultrahigh dimensional non-normal data, Double penalized variable selection procedure for partially linear models with longitudinal data, Independent feature screening for ultrahigh-dimensional models with interactions, On model selection from a 
finite family of possibly misspecified time series models, Oracle inequalities for high dimensional vector autoregressions, A flexible semiparametric forecasting model for time series, On nonparametric feature filters in electromagnetic imaging, Accelerating a Gibbs sampler for variable selection on genomics data with summarization and variable pre-selection combining an array DBMS and R, Robust feature screening for varying coefficient models via quantile partial correlation, Profile forward regression screening for ultra-high dimensional semiparametric varying coefficient partially linear models, Robust rank screening for ultrahigh dimensional discriminant analysis, Robust \(U\)-type test for high dimensional regression coefficients using refitted cross-validation variance estimation, Sure feature screening for high-dimensional dichotomous classification, Joint adaptive mean-variance regularization and variance stabilization of high dimensional data, Ball Covariance: A Generic Measure of Dependence in Banach Space, Sparse classification with paired covariates, Consistent tuning parameter selection in high dimensional sparse linear regression, Two-directional simultaneous inference for high-dimensional models, Variable selection for partially linear models via partial correlation, Stabilizing Variable Selection and Regression, Selection by partitioning the solution paths, Variable selection after screening: with or without data splitting?, Factor-Adjusted Regularized Model Selection, Restricted fence method for covariate selection in longitudinal data analysis, Empirical likelihood for a varying coefficient partially linear model with diverging number of parameters, Principled sure independence screening for Cox models with ultra-high-dimensional covariates, Independent rule in classification of multivariate binary data, Sparse hierarchical regression with polynomials, Orthogonal one step greedy procedure for heteroscedastic linear models, ARGONAUT: algorithms for global optimization of constrained grey-box computational problems, Penalized empirical likelihood inference for sparse additive hazards regression with a diverging number of covariates, Stable prediction in high-dimensional linear models, Post-model-selection inference in linear regression models: an integrated review, Variable screening for varying coefficient models with ultrahigh-dimensional survival data, Mallows model averaging with effective model size in fragmentary data prediction, Feature screening and FDR control with knockoff features for ultrahigh-dimensional right-censored data, Robust error density estimation in ultrahigh dimensional sparse linear model, On LASSO for predictive regression, Bayesian factor-adjusted sparse regression, High-dimensional causal mediation analysis based on partial linear structural equation models, A data-driven line search rule for support recovery in high-dimensional data analysis, Independence index sufficient variable screening for categorical responses, The backbone method for ultra-high dimensional sparse machine learning, Conditional sure independence screening by conditional marginal empirical likelihood, Bayesian sparse reduced rank multivariate regression, Sure screening by ranking the canonical correlations, Safe feature screening rules for the regularized Huber regression, Robust post-selection inference of high-dimensional mean regression with heavy-tailed asymmetric or heteroskedastic errors, Polya tree-based nearest neighborhood regression, The cumulative Kolmogorov 
filter for model-free screening in ultrahigh dimensional data, Model-free feature screening via a modified composite quantile correlation, Robust change point detection method via adaptive LAD-Lasso, Model-free conditional screening via conditional distance correlation, Model free feature screening with dependent variable in ultrahigh dimensional binary classification, Statistical inference for model parameters in stochastic gradient descent, Penalized empirical likelihood for semiparametric models with a diverging number of parameters, Feature selection for generalized varying coefficient mixed-effect models with application to obesity GWAS, Nonconcave penalized estimation in sparse vector autoregression model, Joint model-free feature screening for ultra-high dimensional semi-competing risks data, Model-free feature screening for ultrahigh dimensional classification, Nonparametric variable selection and its application to additive models, A modified mean-variance feature-screening procedure for ultrahigh-dimensional discriminant analysis, Nonparametric independence screening for ultra-high dimensional generalized varying coefficient models with longitudinal data, Feature screening in ultrahigh-dimensional varying-coefficient Cox model, Projective inference in high-dimensional problems: prediction and feature selection, Feature screening under missing indicator imputation with non-ignorable missing response, Optimal estimation of direction in regression models with large number of parameters, Bayesian variable selection for survival data using inverse moment priors, Uniform joint screening for ultra-high dimensional graphical models, GRID: a variable selection and structure discovery method for high dimensional nonparametric regression, A two-step method for estimating high-dimensional Gaussian graphical models, Sequential feature screening for generalized linear models with sparse ultra-high dimensional data, Block-regularized repeated learning-testing for estimating generalization error, Robust composite weighted quantile screening for ultrahigh dimensional discriminant analysis, Ultra-high dimensional variable screening via Gram-Schmidt orthogonalization, Conditional SIRS for nonparametric and semiparametric models by marginal empirical likelihood, A robust and efficient estimation and variable selection method for partially linear models with large-dimensional covariates, Ultrahigh dimensional variable selection through the penalized maximum trimmed likelihood estimator, Sufficient dimension reduction on marginal regression for gaps of recurrent events, Model selection for high-dimensional linear regression with dependent observations, Which bridge estimator is the best for variable selection?, Exact tests via multiple data splitting, Estimation in the presence of many nuisance parameters: composite likelihood and plug-in likelihood, Dynamic tilted current correlation for high dimensional variable screening, Consistent group selection with Bayesian high dimensional modeling, Variable importance assessments and backward variable selection for multi-sample problems, Composite quantile regression for ultra-high dimensional semiparametric model averaging, Robust communication-efficient distributed composite quantile regression and variable selection for massive data, Feature filter for estimating central mean subspace and its sparse solution, A sequential approach to feature selection in high-dimensional additive models, Testing regression coefficients in high-dimensional and sparse 
settings, The fused Kolmogorov-Smirnov screening for ultra-high dimensional semi-competing risks data, Sparse and efficient estimation for partial spline models with increasing dimension, High-dimensional variable screening and bias in subsequent inference, with an empirical comparison, Regularized principal components of heritability, Sparse model identification and learning for ultra-high-dimensional additive partially linear models, Forward regression for Cox models with high-dimensional covariates, Screening and selection for quantile regression using an alternative measure of variable importance, Sufficient variable selection using independence measures for continuous response, Robust feature screening for elliptical copula regression model, Consistency of Bayesian linear model selection with a growing number of parameters, Variable selection via adaptive false negative control in linear regression, A note on the one-step estimator for ultrahigh dimensionality, Recovery of simultaneous low rank and two-way sparse coefficient matrices, a nonconvex approach, Sign-based test for mean vector in high-dimensional and sparse settings, A high-dimensional spatial rank test for two-sample location problems, Grouped variable screening for ultra-high dimensional data for linear model, Estimating the rate constant from biosensor data via an adaptive variational Bayesian approach, Test for conditional independence with application to conditional screening, Tolerance intervals from ridge regression in the presence of multicollinearity and high dimension, Ultrahigh dimensional precision matrix estimation via refitted cross validation, GMM and misspecification correction for misspecified models with diverging number of parameters, Feature screening for ultrahigh-dimensional censored data with varying coefficient single-index model, Feature screening for ultrahigh-dimensional additive logistic models, Selective inference via marginal screening for high dimensional classification, Discovering model structure for partially linear models, Error density estimation in high-dimensional sparse linear model, Penalized full likelihood approach to variable selection for Cox's regression model under nested case-control sampling, Nonparametric screening under conditional strictly convex loss for ultrahigh dimensional sparse data, Sure independence screening in ultrahigh dimensional generalized additive models, Generalized high-dimensional trace regression via nuclear norm regularization, A two-stage sparse logistic regression for optimal gene selection in high-dimensional microarray data classification, False discovery control for penalized variable selections with high-dimensional covariates, A knockoff filter for high-dimensional selective inference, Subjective Bayesian testing using calibrated prior probabilities, Joint feature screening for ultra-high-dimensional sparse additive hazards model by the sparsity-restricted pseudo-score estimator, Robust sufficient dimension reduction via ball covariance, Variable selection for covariate adjusted regression model, Random projections as regularizers: learning a linear discriminant from fewer observations than dimensions, Stock return predictability: A factor-augmented predictive regression system with shrinkage method, A novel bagging approach for variable ranking and selection via a mixed importance measure, Sure independence screening for real medical Poisson data, Robust feature screening for high-dimensional survival data, A sure independence screening procedure 
for ultra-high dimensional partially linear additive models, Projection correlation between scalar and vector variables and its use in feature screening with multi-response data, Feature screening of quadratic inference functions for ultrahigh dimensional longitudinal data, Variable selection under multicollinearity using modified log penalty, On Sure Screening with Multiple Responses, Efficient kernel-based variable selection with sparsistency, Consistent Screening Procedures in High-dimensional Binary Classification, Sparse Composite Quantile Regression with Ultra-high Dimensional Heterogeneous Data, A Review on Sliced Inverse Regression, Sufficient Dimension Reduction, and Applications, Data-guided Treatment Recommendation with Feature Scores, F-test and z-test for high-dimensional regression models with a factor structure, Model-free feature screening for ultrahigh dimensional data via a Pearson chi-square based index, Finite-sample results for lasso and stepwise Neyman-orthogonal Poisson estimators, Semi-Standard Partial Covariance Variable Selection When Irrepresentable Conditions Fail, Prior Knowledge Guided Ultra-High Dimensional Variable Screening With Application to Neuroimaging Data, Nonparametric independence feature screening for ultrahigh-dimensional missing data, Model-free survival conditional feature screening, Variance-estimation-free test of significant covariates in high-dimensional regression, Dimension-wise sparse low-rank approximation of a matrix with application to variable selection in high-dimensional integrative analyzes of association, Scalable inference for high-dimensional precision matrix, Variable Selection With Second-Generation P-Values, A nonparametric procedure for linear and nonlinear variable screening, Features Selection as a Nash-Bargaining Solution: Applications in Online Advertising and Information Systems, Nonlinear Variable Selection via Deep Neural Networks, MIP-BOOST: Efficient and Effective L0 Feature Selection for Linear Regression, Least-Square Approximation for a Distributed System, Adjusting systematic bias in high dimensional principal component scores, Modeling gene-covariate interactions in sparse regression with group structure for genome-wide association studies, Copula-based Partial Correlation Screening: a Joint and Robust Approach, A model-free variable selection method for reducing the number of redundant variables, Nonparametric independence screening for ultra-high-dimensional longitudinal data under additive models, Bayesian Neural Networks for Selection of Drug Sensitive Genes, Ultrahigh-dimensional sufficient dimension reduction for censored data with measurement error in covariates, Testing for Neglected Nonlinearity Using Regularized Artificial Neural Networks, Variance ratio screening for ultrahigh dimensional discriminant analysis, Sparsity identification in ultra-high dimensional quantile regression models with longitudinal data, Unnamed Item, Variance estimation for sparse ultra-high dimensional varying coefficient models, Conditional distance correlation sure independence screening for ultra-high dimensional survival data, A new robust model-free feature screening method for ultra-high dimensional right censored data, Non-marginal feature screening for additive hazard model with ultrahigh-dimensional covariates, In defense of LASSO, Regression estimation via information-weighted composite models with different dimensions, Estimation in partial linear model with spline modal function, Feature screening for 
high-dimensional survival data via censored quantile correlation, Fast stepwise regression based on multidimensional indexes, Modal additive models with data-driven structure identification, Unnamed Item, Unnamed Item, Asymptotics of AIC, BIC and \(C_p\) model selection rules in high-dimensional regression, Tests for high-dimensional single-index models, Unnamed Item, Screening Rules and its Complexity for Active Set Identification, Generalized fiducial factor: an alternative to the Bayes factor for forensic identification of source problems, Unnamed Item, Cluster feature selection in high-dimensional linear models, Evolution of high-frequency systematic trading: a performance-driven gradient boosting model, Facilitating high‐dimensional transparent classification via empirical Bayes variable selection, On the accuracy in high‐dimensional linear models and its application to genomic selection, Feature selection in finite mixture of sparse normal linear models in high-dimensional feature space, A Bayesian approach with generalized ridge estimation for high-dimensional regression and testing, RANK: Large-Scale Inference With Graphical Nonlinear Knockoffs, L2RM: Low-Rank Linear Regression Models for High-Dimensional Matrix Responses, A survey of high dimension low sample size asymptotics, Greedy forward regression for variable screening, Unnamed Item, Unnamed Item, Simultaneous Clustering and Estimation of Heterogeneous Graphical Models, Unnamed Item, Error Variance Estimation in Ultrahigh-Dimensional Additive Models, Multiple Testing of Submatrices of a Precision Matrix With Applications to Identification of Between Pathway Interactions, Bayesian Approaches for Large Biological Networks, Multiple outliers detection in sparse high-dimensional regression, Quantile screening for ultra-high-dimensional heterogeneous data conditional on some variables, Two-sample spatial rank test using projection, Feature screening in ultrahigh-dimensional additive Cox model, Model Selection for High-Dimensional Quadratic Regression via Regularization, On Reject and Refine Options in Multicategory Classification, Semiparametric Ultra-High Dimensional Model Averaging of Nonlinear Dynamic Time Series, Two tales of variable selection for high dimensional regression: Screening and model building, Cross‐validation and peeling strategies for survival bump hunting using recursive peeling methods, Pruning variable selection ensembles, Weighted linear programming discriminant analysis for high‐dimensional binary classification, Robust feature screening for multi-response trans-elliptical regression model with ultrahigh-dimensional covariates, Portfolio Optimization with Ambiguous Correlation and Stochastic Volatilities, Particle swarm stepwise (PaSS) algorithm for information criteria-based variable selections, A fast adaptive Lasso for the cox regression via safe screening rules, Grouped feature screening for ultra-high dimensional data for the classification model, A penalized estimation for the Cox model with ordinal multinomial covariates, MM Algorithms for Variance Components Models, A model-free conditional screening approach via sufficient dimension reduction, Gap Safe screening rules for sparsity enforcing penalties, Supervised t-Distributed Stochastic Neighbor Embedding for Data Visualization and Classification, Stability Selection, Modified martingale difference correlations, ISLET: Fast and Optimal Low-Rank Tensor Regression via Importance Sketching, Statistical inference for nonignorable missing-data 
problems: a selective review, Variable screening with missing covariates: a discussion of ‘statistical inference for nonignorable missing data problems: a selective review’ by Niansheng Tang and Yuanyuan Ju, Group screening for ultra-high-dimensional feature under linear model, A selective overview of sparse sufficient dimension reduction, A review of distributed statistical inference, Variable screening in multivariate linear regression with high-dimensional covariates, Model averaging for generalized linear models in fragmentary data prediction, Achieving the oracle property of OEM with nonconvex penalties, Sequential profile Lasso for ultra-high-dimensional partially linear models, AdaBoost Semiparametric Model Averaging Prediction for Multiple Categories, Markov Neighborhood Regression for High-Dimensional Inference, Covariate Information Number for Feature Screening in Ultrahigh-Dimensional Supervised Problems, Integrating Multisource Block-Wise Missing Data in Model Selection, Model Selection via the VC-Dimension, Partial correlation screening for varying coefficient models, Efficient Signal Inclusion With Genomic Applications, Fence methods for backcross experiments, Model averaging with privacy-preserving, Group feature screening via the F statistic, A method for analyzing supersaturated designs inspired by control charts, A Generic Sure Independence Screening Procedure, Adaptive hybrid screening for efficient lasso optimization, Bayesian bridge quantile regression, Sure independence screening for analyzing supersaturated designs, A multiple-case deletion approach for detecting influential points in high-dimensional regression, Conditional Test for Ultrahigh Dimensional Linear Regression Coefficients, Asymmetric influence measure for high dimensional regression, Asymptotic Theory of \(\boldsymbol \ell _1\) -Regularized PDE Identification from a Single Noisy Trajectory, Model Selection of Generalized Estimating Equation With Divergent Model Size, Statistical Inference for High-Dimensional Models via Recursive Online-Score Estimation, High-Dimensional Precision Medicine From Patient-Derived Xenografts, Forward variable selection for ultra-high dimensional quantile regression models, Model-free, monotone invariant and computationally efficient feature screening with data-adaptive threshold, Robust sure independence screening for nonpolynomial dimensional generalized linear models, A new reproducing kernel‐based nonlinear dimension reduction method for survival data, Feature screening for multiple responses, Compositional knockoff filter for high‐dimensional regression analysis of microbiome data, Using the “Hidden” genome to improve classification of cancer types, Two‐stage penalized regression screening to detect biomarker–treatment interactions in randomized clinical trials, Jackknife model averaging for high‐dimensional quantile regression, Cellwise outlier detection with false discovery rate control, Sure joint feature screening in nonparametric transformation model for right censored data, Statistical inference on the significance of rows and columns for matrix-valued data in an additive model, Fast robust feature screening for ultrahigh-dimensional varying coefficient models, A new nonparametric test for high-dimensional regression coefficients, A model-free feature screening approach based on kernel density estimation, Fast Bayesian variable screenings for binary response regressions with small sample size, Variable screening for ultrahigh dimensional censored quantile 
regression, Model selection in high-dimensional noisy data: a simulation study, Robust model-free feature screening for ultrahigh dimensional surrogate data, Ultrahigh dimensional feature screening for additive model with multivariate response, Robust feature screening procedures for single and mixed types of data, A consistent variable screening procedure with family-wise error control, Nonlinear Factor‐Augmented Predictive Regression Models with Functional Coefficients, Model-Free Feature Screening and FDR Control With Knockoff Features, Model-Free Forward Screening Via Cumulative Divergence, Unnamed Item, Variable selection in the high-dimensional continuous generalized linear model with current status data, Independent screening in high-dimensional exponential family predictors’ space, A Simple Two-Sample Test in High Dimensions Based on L2-Norm, Block-Regularized m × 2 Cross-Validated Estimator of the Generalization Error, Generalized Regression Estimators with High-Dimensional Covariates, The Lq-norm learning for ultrahigh-dimensional survival data: an integrative framework, Ranking-Based Variable Selection for high-dimensional data, TESTING CONSTANCY OF CONDITIONAL VARIANCE IN HIGH DIMENSION, Quantile-adaptive variable screening in ultra-high dimensional varying coefficient models, A robust variable screening method for high-dimensional data, IPAD: Stable Interpretable Forecasting with Knockoffs Inference, Targeted Random Projection for Prediction From High-Dimensional Features, Learning Moral Graphs in Construction of High-Dimensional Bayesian Networks for Mixed Data, Likelihood Ratio Test in Multivariate Linear Regression: from Low to High Dimension, Feature Screening for Network Autoregression Model, Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models, Inference for biased models: a quasi-instrumental variable approach, Operator-induced structural variable selection for identifying materials genes, Composite Coefficient of Determination and Its Application in Ultrahigh Dimensional Variable Screening, Global sensitivity analysis with dependence measures, Variable selection in regression using maximal correlation and distance correlation, A stepwise regression algorithm for high-dimensional variable selection, An iterative approach to distance correlation-based sure independence screening, Information-Based Optimal Subdata Selection for Big Data Linear Regression, Category-Adaptive Variable Screening for Ultra-High Dimensional Heterogeneous Categorical Data, Multiple Influential Point Detection in High Dimensional Regression Spaces, Sparse Minimum Discrepancy Approach to Sufficient Dimension Reduction with Simultaneous Variable Selection in Ultrahigh Dimension, Unnamed Item, Unnamed Item, Unnamed Item, Unnamed Item, Intentional Control of Type I Error Over Unconscious Data Distortion: A Neyman–Pearson Approach to Text Classification, Unnamed Item, Variance estimation based on blocked 3×2 cross-validation in high-dimensional linear regression, Model-free slice screening for ultrahigh-dimensional survival data, Oracle Inequalities for Convex Loss Functions with Nonlinear Targets, Lassoing the Determinants of Retirement, On correlation rank screening for ultra-high dimensional competing risks data, Model free feature screening for ultrahigh dimensional covariates with right censored outcomes, Optimal model averaging for divergent-dimensional Poisson regressions, Asymptotic Properties of Marginal Least-Square Estimator for Ultrahigh-Dimensional Linear 
Regression Models with Correlated Errors, A Dimension Reduction Technique for Large-Scale Structured Sparse Optimization Problems with Application to Convex Clustering



Cites Work