Optimal regression rates for SVMs using Gaussian kernels
Publication: Q1951100
DOI: 10.1214/12-EJS760
zbMath: 1337.62073
Wikidata: Q59196380 (Scholia: Q59196380)
MaRDI QID: Q1951100
Publication date: 29 May 2013
Published in: Electronic Journal of Statistics
Full work available at URL: https://projecteuclid.org/euclid.ejs/1357913280
MSC classifications: Nonparametric regression and quantile regression (62G08) ⋮ Nonparametric estimation (62G05) ⋮ Learning and adaptive systems in artificial intelligence (68T05)
Related Items
Stable splittings of Hilbert spaces of functions of infinitely many variables ⋮ An SVM-like approach for expectile regression ⋮ Intrinsic Dimension Adaptive Partitioning for Kernel Methods ⋮ Learning rates for the kernel regularized regression with a differentiable strongly convex loss ⋮ Structure learning via unstructured kernel-based M-estimation ⋮ Fast learning from \(\alpha\)-mixing observations ⋮ Density-Difference Estimation ⋮ Quantile regression with \(\ell_1\)-regularization and Gaussian kernels ⋮ Filtering with State-Observation Examples via Kernel Monte Carlo Filter ⋮ Learning Theory Estimates with Observations from General Stationary Stochastic Processes ⋮ Learning Rates for Classification with Gaussian Kernels ⋮ Learning rates for kernel-based expectile regression ⋮ Optimal learning with anisotropic Gaussian SVMs ⋮ The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs ⋮ A closer look at covering number bounds for Gaussian kernels ⋮ Distributed regularized least squares with flexible Gaussian kernels ⋮ Moving quantile regression ⋮ Adaptive learning rates for support vector machines working on data with low intrinsic dimension ⋮ Multikernel Regression with Sparsity Constraint ⋮ Interpretable Dynamic Treatment Regimes ⋮ Optimal learning with Gaussians and correntropy loss
Cites Work
- Estimating conditional quantiles with the help of the pinball loss
- Model selection for regularized least-squares algorithm in learning theory
- Regularization in kernel learning
- Learning and approximation by Gaussians on Riemannian manifolds
- Theory of function spaces
- Fast rates for support vector machines using Gaussian kernels
- Global nonparametric estimation of conditional quantile functions and their derivatives
- A distribution-free theory of nonparametric regression
- Optimal rates for the regularized least-squares algorithm
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Error bounds for learning the kernel
- Interpolation of Besov Spaces
- Quantitative Korovkin Theorems for Positive Linear Operators on \(L_p\)-Spaces
- SMO Algorithm for Least-Squares SVM Formulations
- Efficient semiparametric estimation of a partially linear quantile regression model
- Estimating the approximation error in learning theory
- Function Classes That Approximate the Bayes Risk
- Quantile Regression in Reproducing Kernel Hilbert Spaces
- Theory of Reproducing Kernels