Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions
Publication: 547325
DOI: 10.1007/s11425-010-4043-2
zbMath: 1215.68203
OpenAlex: W1993410613
MaRDI QID: Q547325
Publication date: 1 July 2011
Published in: Science China. Mathematics
Full work available at URL: https://doi.org/10.1007/s11425-010-4043-2
Keywords: Sobolev space; reproducing kernel Hilbert space; binary classification; general convex loss; Tsybakov noise condition
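For context, the Tsybakov noise condition named in the keywords admits a standard formulation; a minimal LaTeX sketch, assuming the usual binary-classification setup with regression function \(\eta(x) = \operatorname{Prob}(Y = 1 \mid X = x)\) (the paper's exact constants and notation may differ):

\[
\operatorname{Prob}_X\bigl(\{x : \lvert 2\eta(x) - 1 \rvert \le t\}\bigr) \le C\, t^{q} \quad \text{for all } t > 0,
\]

where \(C > 0\) and \(q \in [0, \infty]\) is the noise exponent. Larger \(q\) means less probability mass near the decision boundary \(\eta(x) = 1/2\), which is what allows the improved error bounds referred to in the title.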
Related Items (6)
- Learning rates of kernel-based robust classification
- The convergence rates of Shannon sampling learning algorithms
- Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping
- Quantitative convergence analysis of kernel based large-margin unified machines
- Learning Rates for Classification with Gaussian Kernels
- Comparison theorems on large-margin learning
Cites Work
- Multi-kernel regularized classifiers
- Fast rates for support vector machines using Gaussian kernels
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Optimal aggregation of classifiers in statistical learning
- Learning Theory
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- Convexity, Classification, and Risk Bounds