Learning rates for partially linear support vector machine in high dimensions
From MaRDI portal
Publication: 5856267
DOI: 10.1142/S0219530520400126
zbMath: 1462.68165
arXiv: 2006.03288
OpenAlex: W3081602296
MaRDI QID: Q5856267
Yifan Xia, Yongchao Hou, Shao-Gao Lv
Publication date: 25 March 2021
Published in: Analysis and Applications
Full work available at URL: https://arxiv.org/abs/2006.03288
MSC classifications:
- 62J07 Ridge regression; shrinkage estimators (Lasso)
- 62G05 Nonparametric estimation
- 68T05 Learning and adaptive systems in artificial intelligence
Cites Work
- Statistics for high-dimensional data. Methods, theory and applications.
- Sparsity in multiple kernel learning
- Multi-kernel regularized classifiers
- About the constants in Talagrand's concentration inequalities for empirical processes.
- Universality of deep convolutional neural networks
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Empirical minimization
- Local Rademacher complexities
- Classifiers of support vector machine type with \(\ell_1\) complexity regularization
- Error bounds for learning the kernel
- Support Vector Machines
- Deep distributed convolutional neural networks: Universality
- The Partial Linear Model in High Dimensions
- Variable Selection for Support Vector Machines in Moderately High Dimensions