A NOTE ON STABILITY OF ERROR BOUNDS IN STATISTICAL LEARNING THEORY
From MaRDI portal
Publication: 3096969
DOI: 10.1142/S0219530511001893
zbMath: 1267.68185
OpenAlex: W2046237749
Wikidata: Q60700494
Scholia: Q60700494
MaRDI QID: Q3096969
Publication date: 15 November 2011
Published in: Analysis and Applications
Full work available at URL: https://doi.org/10.1142/s0219530511001893
Mathematics Subject Classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Coding and information theory (compaction, compression, models of communication, encoding schemes, etc.; aspects in computer science) (68P30)
Cites Work
- Model selection for regularized least-squares algorithm in learning theory
- On regularization algorithms in learning theory
- Regularization networks and support vector machines
- Optimal rates for the regularized least-squares algorithm
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Consistency of Support Vector Machines and Other Regularized Kernel Classifiers
- Cross-validation based adaptation for regularization operators in learning theory