Approximation with polynomial kernels and SVM classifiers
From MaRDI portal
Publication: 2498387
DOI: 10.1007/s10444-004-7206-2
zbMath: 1095.68103
OpenAlex: W1996388931
MaRDI QID: Q2498387
Publication date: 16 August 2006
Published in: Advances in Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10444-004-7206-2
Keywords: classification algorithm; misclassification error; support vector machine; polynomial kernel; regularization scheme; approximation by Durrmeyer operators
Related Items (44)
Multivariate weighted Kantorovich operators ⋮ Nonparametric regression using needlet kernels for spherical data ⋮ Learning by atomic norm regularization with polynomial kernels ⋮ Learning rates of kernel-based robust classification ⋮ Distributed learning via filtered hyperinterpolation on manifolds ⋮ Parameters estimation in Ebola virus transmission dynamics model based on machine learning ⋮ Learning rates of regularized regression on the unit sphere ⋮ Approximation on variable exponent spaces by linear integral operators ⋮ The learning rate of \(l_2\)-coefficient regularized classification with strong loss ⋮ Modal additive models with data-driven structure identification ⋮ Approximation by multivariate Bernstein-Durrmeyer operators and learning rates of least-squares regularized regression with multivariate polynomial kernels ⋮ Learning rates for the kernel regularized regression with a differentiable strongly convex loss ⋮ Representations for the inverses of certain operators ⋮ Approximation by multivariate max-product Kantorovich-type operators and learning rates of least-squares regularized regression ⋮ Fourier analysis on trapezoids with curved sides ⋮ Uniform convergence of Bernstein-Durrmeyer operators with respect to arbitrary measure ⋮ Estimation of convergence rate for multi-regression learning algorithm ⋮ Learning with Convex Loss and Indefinite Kernels ⋮ A Note on Support Vector Machines with Polynomial Kernels ⋮ Generalization Analysis of Fredholm Kernel Regularized Classifiers ⋮ Learning Rates for Classification with Gaussian Kernels ⋮ Applications of the Bernstein-Durrmeyer operators in estimating the norm of Mercer kernel matrices ⋮ Analysis of approximation by linear operators on variable \(L_\rho^{p(\cdot)}\) spaces and applications in learning theory ⋮ Strong inequalities for Hermite-Fejér interpolations and characterization of \(K\)-functionals ⋮ Bernstein-Durrmeyer operators with respect to arbitrary measure. II: Pointwise convergence ⋮ Derivative reproducing properties for kernel methods in learning theory ⋮ Optimal rate of the regularized regression learning algorithm ⋮ Classification with polynomial kernels and \(l^1\)-coefficient regularization ⋮ Learning rates for regularized classifiers using multivariate polynomial kernels ⋮ Almost optimal estimates for approximation and learning by radial basis function networks ⋮ The convergence rate for a \(K\)-functional in learning theory ⋮ Multivariate Bernstein-Durrmeyer operators with arbitrary weight functions ⋮ Learning Rates of lq Coefficient Regularization Learning with Gaussian Kernel ⋮ Regularized Least Square Regression with Spherical Polynomial Kernels ⋮ Operators of Durrmeyer Type with Respect to Arbitrary Measure ⋮ Error analysis of multicategory support vector machine classifiers ⋮ Pointwise convergence of the Bernstein-Durrmeyer operators with respect to a collection of measures ⋮ Analysis of support vector machines regression ⋮ The learning rates of regularized regression based on reproducing kernel Banach spaces ⋮ Deep neural networks for rotation-invariance approximation and learning ⋮ Learning rates of least-square regularized regression with polynomial kernels ⋮ Estimates of learning rates of regularized regression via polyline functions ⋮ Distributed Filtered Hyperinterpolation for Noisy Data on the Sphere ⋮ Approximation by max-product sampling Kantorovich operators with generalized kernels
Cites Work
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- The covering number in learning theory
- Support vector machines are universally consistent
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Optimal aggregation of classifiers in statistical learning
- Support-vector networks
- Regularization networks and support vector machines
- Shannon sampling. II: Connections to learning theory
- On the mathematical foundations of learning
- 10.1162/153244302760185252
- Capacity of reproducing kernel spaces in learning theory
- Estimating the approximation error in learning theory
- Structural risk minimization over data-dependent hierarchies
- 10.1162/153244302760200704
- Shannon sampling and function reconstruction from point values
- Learning Theory
- Convexity, Classification, and Risk Bounds
- Theory of Reproducing Kernels