Leave-One-Out Bounds for Kernel Methods
Publication: 4816853
DOI: 10.1162/089976603321780326
zbMath: 1085.68144
OpenAlex: W2044421224
MaRDI QID: Q4816853
Publication date: 14 September 2004
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/089976603321780326
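To make the record's topic concrete, the sketch below illustrates the leave-one-out idea for one common kernel method. It is not drawn from the publication itself: the function name loo_residuals_krr, the Gaussian kernel bandwidth, and the regularization value lam are illustrative assumptions. It computes exact leave-one-out residuals for kernel ridge regression via the standard hat-matrix identity r_i = (y_i - ŷ_i) / (1 - H_ii) with H = K(K + λI)^{-1}, avoiding n separate refits.

```python
import numpy as np

def loo_residuals_krr(K, y, lam):
    """Exact leave-one-out residuals for kernel ridge regression.

    Uses the standard hat-matrix identity r_i = (y_i - yhat_i) / (1 - H_ii),
    where H = K (K + lam*I)^{-1}, so the model never has to be refit n times.
    """
    n = K.shape[0]
    H = K @ np.linalg.solve(K + lam * np.eye(n), np.eye(n))  # H = K (K + lam I)^{-1}
    y_hat = H @ y
    return (y - y_hat) / (1.0 - np.diag(H))

# Toy usage: Gaussian (RBF) kernel on synthetic 1-D data (all values illustrative).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
sq_dists = (X - X.T) ** 2          # pairwise squared distances, shape (50, 50)
K = np.exp(-0.5 * sq_dists)        # RBF kernel with unit bandwidth
print("LOO mean squared error:", np.mean(loo_residuals_krr(K, y, lam=0.1) ** 2))
```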
Related Items (28)
- Coefficient regularized regression with non-iid sampling
- Consistency and generalization bounds for maximum entropy density estimation
- Least-squares regularized regression with dependent samples and \(q\)-penalty
- ERM learning algorithm for multi-class classification
- Hermite learning with gradient data
- Regularized least square regression with dependent samples
- Learning with sample dependent hypothesis spaces
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- Integral operator approach to learning theory with unbounded sampling
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Least square regression with indefinite kernels and coefficient regularization
- Optimality of regularized least squares ranking with imperfect kernels
- ERM learning with unbounded sampling
- Concentration estimates for learning with unbounded sampling
- Consistency analysis of spectral regularization algorithms
- Indefinite kernel network with \(l^q\)-norm regularization
- Kernel gradient descent algorithm for information theoretic learning
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality
- Robust pairwise learning with Huber loss
- Concentration estimates for learning with \(\ell^1\)-regularizer and data dependent hypothesis spaces
- Least-square regularized regression with non-iid sampling
- A note on application of integral operator in learning theory
- Reproducing Kernel Banach Spaces with the ℓ1 Norm II: Error Analysis for Regularized Least Square Regression
- INDEFINITE KERNEL NETWORK WITH DEPENDENT SAMPLING
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Shannon sampling. II: Connections to learning theory
- Thresholded spectral algorithms for sparse approximations
Cites Work
- An Efron-Stein inequality for nonsymmetric statistics
- Stability results for scattered-data interpolation on Euclidean spheres
- The jackknife estimate of variance
- Error estimates for interpolation by compactly supported radial basis functions of minimal degree
- Rates of convergence for minimum contrast estimators
- Information-theoretic determination of minimax rates of convergence
- Approximation properties of zonal function networks using scattered data on the sphere
- Regularization networks and support vector machines
- On the mathematical foundations of learning
- Stability and generalization (DOI: 10.1162/153244302760200704)
- On the dual formulation of regularized linear systems with convex risks