
Leave-One-Out Bounds for Kernel Methods

From MaRDI portal
Publication:4816853

DOI: 10.1162/089976603321780326
zbMath: 1085.68144
OpenAlex: W2044421224
MaRDI QID: Q4816853

Tong Zhang

Publication date: 14 September 2004

Published in: Neural Computation

Full work available at URL: https://doi.org/10.1162/089976603321780326




Related Items (31)

Coefficient regularized regression with non-iid sampling
Consistency and generalization bounds for maximum entropy density estimation
Least-squares regularized regression with dependent samples and q-penalty
ERM learning algorithm for multi-class classification
Hermite learning with gradient data
Regularized least square regression with dependent samples
Unnamed Item
Learning with sample dependent hypothesis spaces
Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
Integral operator approach to learning theory with unbounded sampling
Learning with coefficient-based regularization and \(\ell^1\)-penalty
Optimal learning rates for least squares regularized regression with unbounded sampling
Least square regression with indefinite kernels and coefficient regularization
Optimality of regularized least squares ranking with imperfect kernels
ERM learning with unbounded sampling
Concentration estimates for learning with unbounded sampling
Consistency analysis of spectral regularization algorithms
Indefinite kernel network with \(l^q\)-norm regularization
Kernel gradient descent algorithm for information theoretic learning
Learning Bounds for Kernel Regression Using Effective Data Dimensionality
Robust pairwise learning with Huber loss
Concentration estimates for learning with \(\ell^1\)-regularizer and data dependent hypothesis spaces
Least-square regularized regression with non-iid sampling
A note on application of integral operator in learning theory
Reproducing Kernel Banach Spaces with the ℓ1 Norm II: Error Analysis for Regularized Least Square Regression
INDEFINITE KERNEL NETWORK WITH DEPENDENT SAMPLING
Half supervised coefficient regularization for regression learning with unbounded sampling
Unnamed Item
Unnamed Item
Shannon sampling. II: Connections to learning theory
Thresholded spectral algorithms for sparse approximations





This page was built for publication: Leave-One-Out Bounds for Kernel Methods