Analysis of support vector machines regression
Publication: 1022433
DOI: 10.1007/s10208-008-9026-0
zbMath: 1185.68577
OpenAlex: W1982453328
MaRDI QID: Q1022433
Di-Rong Chen, Hongzhi Tong, Li Zhong Peng
Publication date: 22 June 2009
Published in: Foundations of Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10208-008-9026-0
MSC classifications: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
Related Items (20)
- The kernel regularized learning algorithm for solving Laplace equation with Dirichlet boundary
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Approximation analysis of learning algorithms for support vector regression and quantile regression
- Online learning for quantile regression and support vector regression
- Learning with Convex Loss and Indefinite Kernels
- Support vector machines regression with unbounded sampling
- A simpler approach to coefficient regularized support vector machines regression
- Learning with varying insensitive loss
- Learning rate of support vector machine for ranking
- Calibration of \(\epsilon\)-insensitive loss in support vector machines regression
- Posterior consistency of semi-supervised regression on graphs
- Learning rates for regularized classifiers using multivariate polynomial kernels
- The convergence rate for a \(K\)-functional in learning theory
- Support vector machines regression with \(l^1\)-regularizer
- Learning rates of multi-kernel regularized regression
- Analysis of Regression Algorithms with Unbounded Sampling
- Regularized ranking with convex losses and \(\ell^1\)-penalty
- Optimal rate for support vector machine regression with Markov chain samples
- Generalization performance of Gaussian kernels SVMC based on Markov sampling
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
Cites Work
- Model selection for regularized least-squares algorithm in learning theory
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- The covering number in learning theory
- Regularization networks and support vector machines
- Consistency and robustness of kernel-based regression in convex risk minimization
- Approximation with polynomial kernels and SVM classifiers
- Learning rates of least-square regularized regression
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Capacity of reproducing kernel spaces in learning theory
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- DOI: 10.1162/153244302760200704
- Are Loss Functions All the Same?
- Learning Theory
- Theory of Reproducing Kernels
- Robust Statistics