Rademacher Chaos Complexities for Learning the Kernel Problem
DOI: 10.1162/NECO_a_00028
zbMath: 1208.68190
DBLP: journals/neco/YingC10
Wikidata: Q51665656
Scholia: Q51665656
MaRDI QID: Q3057230
Publication date: 24 November 2010
Published in: Neural Computation
Mathematics Subject Classification:
- Nonparametric regression and quantile regression (62G08)
- Learning and adaptive systems in artificial intelligence (68T05)
Related Items (6)
- On the optimal estimation of probability measures in weak and strong topologies
- Error Analysis of Coefficient-Based Regularized Algorithm for Density-Level Detection
- Refined Rademacher Chaos Complexity Bounds with Applications to the Multikernel Learning Problem
- U-Processes and Preference Learning
- Modeling interactive components by coordinate kernel polynomial models
- Generalization bounds for metric and similarity learning
Cites Work
- Limit theorems for \(U\)-processes
- Model selection for regularized least-squares algorithm in learning theory
- Multi-kernel regularized classifiers
- Learning and approximation by Gaussians on Riemannian manifolds
- Fast rates for support vector machines using Gaussian kernels
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Optimal rates for the regularized least-squares algorithm
- Ranking and empirical minimization of \(U\)-statistics
- Local Rademacher complexities
- Learning Theory
- Scale-sensitive dimensions, uniform convergence, and learnability
- Rademacher penalties and structural risk minimization
- Rademacher averages and phase transitions in Glivenko-Cantelli classes
- Shannon sampling and function reconstruction from point values
- DOI: 10.1162/153244303321897690
- Neural Network Learning
- Learning Bounds for Support Vector Machines with Learned Kernels
- Convexity, Classification, and Risk Bounds
- Choosing multiple parameters for support vector machines