High-dimensional local linear regression under sparsity and convex losses
Publication: 6200896
DOI: 10.1214/24-EJS2216 · OpenAlex: W4392156156 · MaRDI QID: Q6200896
Stephen M. S. Lee, Kin Yap Cheung
Publication date: 25 March 2024
Published in: Electronic Journal of Statistics
Full work available at URL: https://projecteuclid.org/journals/electronic-journal-of-statistics/volume-18/issue-1/Highdimensional-local-linear-regression-under-sparsity-and-convex-losses/10.1214/24-EJS2216.full
Cites Work
- Learning sparse gradients for variable selection and dimension reduction
- Component selection and smoothing in multivariate nonparametric regression
- Bias-corrected inference for multivariate nonparametric regression: model selection and oracle property
- Variable selection in nonparametric additive models
- High-dimensional additive modeling
- A Bennett concentration inequality and its application to suprema of empirical processes
- Weak convergence and empirical processes. With applications to statistics
- Selection of variables and dimension reduction in high-dimensional non-parametric regression
- GRID: a variable selection and structure discovery method for high dimensional nonparametric regression
- Minimax-optimal nonparametric regression in high dimensions
- High-dimensional generalized linear models and the lasso
- ℓ1-penalized quantile regression in high-dimensional sparse models
- Rodeo: Sparse, greedy nonparametric regression
- Nonparametric sparsity and regularization
- Surface estimation, variable selection, and the nonparametric oracle property
- Nonparametric Independence Screening in Sparse Ultra-High-Dimensional Additive Models
- Smoothing Parameter Selection in Nonparametric Regression Using an Improved Akaike Information Criterion
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Sparse Additive Models
- Feature Screening via Distance Correlation Learning
- Local Polynomial Kernel Regression for Generalized Linear Models and Quasi-Likelihood Functions
- Variable Selection in Nonparametric Classification Via Measurement Error Model Selection Likelihoods
- L1-Regularization Path Algorithm for Generalized Linear Models
- Variable Selection Using Adaptive Nonlinear Interaction Structures in High Dimensions
- Automatic structure recovery for additive models
- Robust Estimation of a Location Parameter
- New concentration inequalities in product spaces