The learning rate of \(l_2\)-coefficient regularized classification with strong loss
MaRDI QID: Q383667
DOI: 10.1007/s10114-013-0175-y
zbMath: 1277.68099
OpenAlex: W2072060423
Authors: Bao-Huai Sheng, Dao-Hong Xiang
Publication date: 5 December 2013
Published in: Acta Mathematica Sinica. English Series
Full work available at URL: https://doi.org/10.1007/s10114-013-0175-y
MSC classification: Computational learning theory (68Q32); Minimax procedures in statistical decision theory (62C20); Convex programming (90C25)
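The scheme named in the title, ℓ2-coefficient regularized classification, fits a kernel expansion f(x) = Σ_j α_j K(x, x_j) over the sample and penalizes the coefficient vector α directly, rather than the RKHS norm αᵀKα used by standard kernel ridge methods. As a hedged illustration only (the Gaussian kernel, the least-squares choice of strongly convex loss, and the closed-form solver below are assumptions, not the authors' setup), a minimal sketch:

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Gaussian (RBF) Gram matrix: K[i, j] = exp(-||x1_i - x2_j||^2 / (2 sigma^2))."""
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def fit_l2_coefficient_classifier(X, y, lam=0.1, sigma=1.0):
    """l2-coefficient regularization with a least-squares loss (one strongly convex choice).

    Minimizes (1/m) * ||K @ alpha - y||^2 + lam * ||alpha||_2^2
    over f(x) = sum_j alpha_j K(x, x_j), with labels y in {-1, +1}.
    The penalty is on the coefficient vector alpha itself, not the
    RKHS norm alpha^T K alpha of standard kernel ridge regression.
    """
    m = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    # Setting the gradient to zero gives (K^T K + m * lam * I) alpha = K^T y.
    alpha = np.linalg.solve(K.T @ K + m * lam * np.eye(m), K.T @ y)
    return alpha

def predict(X_train, alpha, X_new, sigma=1.0):
    """Plug-in classifier sign(f(x))."""
    return np.sign(gaussian_kernel(X_new, X_train, sigma) @ alpha)

# Toy usage: two Gaussian blobs labeled -1 / +1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (30, 2)), rng.normal(1, 0.5, (30, 2))])
y = np.concatenate([-np.ones(30), np.ones(30)])
alpha = fit_l2_coefficient_classifier(X, y, lam=0.01, sigma=1.0)
print((predict(X, alpha, X) == y).mean())  # training accuracy
```

With the least-squares loss the minimizer has the closed form used above; for other strongly convex losses the same objective would be solved iteratively. The paper's learning-rate analysis concerns how fast the excess misclassification error of such estimators decays with the sample size, not this particular solver.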
Related Items (4)
- Learning rates of kernel-based robust classification
- A comprehensive survey on particle swarm optimization algorithm and its applications
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Error analysis of kernel regularized pairwise learning with a strongly convex loss
Cites Work
- Least square regression with indefinite kernels and coefficient regularization
- Sparsity in penalized empirical risk minimization
- Multi-kernel regularized classifiers
- Estimation of the misclassification error for multicategory support vector machine classification
- Learning rates for regularized classifiers using multivariate polynomial kernels
- Elastic-net regularization in learning theory
- The covering number in learning theory
- Learning rates for least square regressions with coefficient regularization
- ERM learning with unbounded sampling
- Minimax optimal rates of convergence for multicategory classifications
- On the rate of convergence for multi-category classification based on convex losses
- The covering number for some Mercer kernel Hilbert spaces
- The consistency of multicategory support vector machines
- Approximation with polynomial kernels and SVM classifiers
- Classifiers of support vector machine type with \(\ell_1\) complexity regularization
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Capacity of reproducing kernel spaces in learning theory
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- Estimating the approximation error in learning theory
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Shannon sampling and function reconstruction from point values
- DOI: 10.1162/153244303321897690
- DOI: 10.1162/1532443041827925
- Are Loss Functions All the Same?