
Tuning Parameter Selection in High Dimensional Penalized Likelihood

From MaRDI portal
Publication:5743163

DOI: 10.1111/rssb.12001
zbMath: 1411.62216
arXiv: 1605.03321
OpenAlex: W3103324688
MaRDI QID: Q5743163

Cheng Yong Tang, Yingying Fan

Publication date: 9 May 2019

Published in: Journal of the Royal Statistical Society Series B: Statistical Methodology

Full work available at URL: https://arxiv.org/abs/1605.03321



Related Items

Achieving the oracle property of OEM with nonconvex penalties
Detecting weak signals in high dimensions
Modeling association between multivariate correlated outcomes and high-dimensional sparse covariates: the adaptive SVS method
Using penalized EM algorithm to infer learning trajectories in latent transition CDM
Variable selection under multicollinearity using modified log penalty
Stability enhanced variable selection for a semiparametric model with flexible missingness mechanism and its application to the ChAMP study
A self-calibrated direct approach to precision matrix estimation and linear discriminant analysis in high dimensions
Robust estimation and outlier detection for varying-coefficient models via penalized regression
Tuning-free ridge estimators for high-dimensional generalized linear models
Variable Selection With Second-Generation P-Values
Penalized quasi-likelihood estimation of generalized Pareto regression -- consistent identification of risk factors for extreme losses
Random subspace method for high-dimensional regression with the \texttt{R} package \texttt{regRSM}
Oracle inequalities, variable selection and uniform inference in high-dimensional correlated random effects panel data models
L0-Regularized Learning for High-Dimensional Additive Hazards Regression
Analysis of overfitting in the regularized Cox model
A Dirichlet process functional approach to heteroscedastic-consistent covariance estimation
Model Selection With Lasso-Zero: Adding Straw to the Haystack to Better Find Needles
High-dimensional \(A\)-learning for optimal dynamic treatment regimes
Hierarchical correction of \(p\)-values via an ultrametric tree running Ornstein-Uhlenbeck process
Tuning Parameter Selection in the LASSO with Unspecified Propensity
Identifying Latent Structures in Restricted Latent Class Models
Homogeneity detection for the high-dimensional generalized linear model
The use of random-effect models for high-dimensional variable selection problems
On the sign consistency of the Lasso for the high-dimensional Cox model
Concordance and value information criteria for optimal treatment decision
In defense of LASSO
Marginal maximum likelihood estimation methods for the tuning parameters of ridge, power ridge, and generalized ridge regression
AIC for the non-concave penalized likelihood method
Regularized latent class analysis with application in cognitive diagnosis
Dependence modelling in ultra high dimensions with vine copulas and the graphical Lasso
Asymptotics of AIC, BIC and \(C_p\) model selection rules in high-dimensional regression
Globaltest confidence regions and their application to ridge regression
Estimation in multivariate linear mixed models for longitudinal data with multiple outputs: Application to PBCseq data analysis
Globally Adaptive Longitudinal Quantile Regression With High Dimensional Compositional Covariates
Variables selection using \(\mathcal{L}_0\) penalty
Variable selection in linear-circular regression models
Predictive quantile regression with mixed roots and increasing dimensions: the ALQR approach
A modified information criterion for tuning parameter selection in 1d fused LASSO for inference on multiple change points
Selection of fixed effects in high-dimensional generalized linear mixed models
Consistent tuning parameter selection in high-dimensional group-penalized regression
High-Dimensional Censored Regression via the Penalized Tobit Likelihood
Low-Rank Regression Models for Multiple Binary Responses and their Applications to Cancer Cell-Line Encyclopedia Data
Cross-Fitted Residual Regression for High-Dimensional Heteroscedasticity Pursuit
A tensor-EM method for large-scale latent class analysis with binary responses
Information criteria for latent factor models: a study on factor pervasiveness and adaptivity
Stability Approach to Regularization Selection for Reduced-Rank Regression
Robust Signal Recovery for High-Dimensional Linear Log-Contrast Models with Compositional Covariates
Asymptotically honest confidence regions for high dimensional parameters by the desparsified conservative Lasso
Variable selection approach for zero-inflated count data via adaptive lasso
Sparse alternatives to ridge regression: a random effects approach
Lasso penalized model selection criteria for high-dimensional multivariate linear regression analysis
A Tuning-free Robust and Efficient Approach to High-dimensional Regression
Cross-Validation With Confidence
Parallel integrative learning for large-scale multi-response regression with incomplete outcomes
Variable screening for high dimensional time series
Pairwise fusion approach incorporating prior constraint information
Smooth predictive model fitting in regression
High-dimensional mean estimation via \(\ell_1\) penalized normal likelihood
Model selection in sparse high-dimensional vine copula models with an application to portfolio risk
Efficient regularized regression with \(L_0\) penalty for variable selection and network construction
Forward-Backward Selection with Early Dropping
Selection by partitioning the solution paths
Log-Contrast Regression with Functional Compositional Predictors: Linking Preterm Infant's Gut Microbiome Trajectories to Neurobehavioral Outcome
A study on tuning parameter selection for the high-dimensional lasso
Model Selection for High-Dimensional Quadratic Regression via Regularization
High-dimensional variable selection via low-dimensional adaptive learning
Clustering of subsample means based on pairwise L1 regularized empirical likelihood
Kernel density regression
Group variable selection in the Andersen-Gill model for recurrent event data
Tuning parameter selection for penalised empirical likelihood with a diverging number of parameters
Representing Sparse Gaussian DAGs as Sparse R-Vines Allowing for Non-Gaussian Dependence
Tuning parameter calibration for \(\ell_1\)-regularized logistic regression
Testing random effects in linear mixed models: another look at the F-test (with discussion)
A Sparse Learning Approach to Relative-Volatility-Managed Portfolio Selection
Linear hypothesis testing for high dimensional generalized linear models
An Efficient Linearly Convergent Regularized Proximal Point Algorithm for Fused Multiple Graphical Lasso Problems
Sparse spatially clustered coefficient model via adaptive regularization
Nonparametric homogeneity pursuit in functional-coefficient models