Error analysis for coefficient-based regularized regression in additive models
DOI: 10.1016/j.spl.2017.10.001 · zbMath: 1440.62278 · OpenAlex: W2766126305 · MaRDI QID: Q1698243
Yanfang Tao, Luoqing Li, Biqin Song
Publication date: 15 February 2018
Published in: Statistics & Probability Letters
Full work available at URL: https://doi.org/10.1016/j.spl.2017.10.001
MSC classification: Linear regression; mixed models (62J05) · Learning and adaptive systems in artificial intelligence (68T05)
Cites Work
- Consistency of support vector machines using additive kernels for additive models
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Unified approach to coefficient-based regularized regression
- Component selection and smoothing in multivariate nonparametric regression
- Multi-kernel regularized classifiers
- Variable selection in nonparametric additive models
- Learning with sample dependent hypothesis spaces
- Kernel-based sparse regression with the correntropy-induced loss
- Learning theory estimates via integral operators and their approximations
- Learning rates for the risk of kernel-based quantile regression estimators in additive models
- Learning Theory
- A new concentration result for regularized risk minimizers
- Sparse Additive Models
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- Error Analysis of Coefficient-Based Regularized Algorithm for Density-Level Detection
- Minimax-optimal rates for sparse additive models over kernel classes via convex programming