scientific article
From MaRDI portal
Publication:3174075
zbMath: 1222.68339 · MaRDI QID: Q3174075
Publication date: 12 October 2011
Full work available at URL: http://www.jmlr.org/papers/v8/ying07a.html
Title: zbMATH Open Web Interface contents unavailable due to conflicting licenses.
Keywords: learning theory; Glivenko-Cantelli class; Gaussian kernel; regularization scheme; empirical covering number; flexible variances
MSC classification: Classification and discrimination; cluster analysis (statistical aspects) (62H30); Learning and adaptive systems in artificial intelligence (68T05)
Related Items (35)
Learning with sample dependent hypothesis spaces ⋮ The optimal solution of multi-kernel regularization learning ⋮ Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies ⋮ Multi-kernel regularized classifiers ⋮ Error analysis on regularized regression based on the maximum correntropy criterion ⋮ Summation of Gaussian shifts as Jacobi's third theta function ⋮ Convergence of online pairwise regression learning with quadratic loss ⋮ Quantitative convergence analysis of kernel based large-margin unified machines ⋮ Optimal regression rates for SVMs using Gaussian kernels ⋮ Conditional quantiles with varying Gaussians ⋮ Error bounds for \(l^p\)-norm multiple kernel learning with least square loss ⋮ Refined Rademacher Chaos Complexity Bounds with Applications to the Multikernel Learning Problem ⋮ A Note on Support Vector Machines with Polynomial Kernels ⋮ Learning Rates for Classification with Gaussian Kernels ⋮ Convergence analysis of online algorithms ⋮ Learning performance of regularized regression with multiscale kernels based on Markov observations ⋮ Orthogonality from disjoint support in reproducing kernel Hilbert spaces ⋮ Parzen windows for multi-class classification ⋮ Learning and approximation by Gaussians on Riemannian manifolds ⋮ Learning the coordinate gradients ⋮ Classification with Gaussians and convex loss. 
II: Improving error bounds by noise conditions ⋮ Learning rates of multi-kernel regularized regression ⋮ Approximation of kernel matrices by circulant matrices and its application in kernel selection methods ⋮ Nonlinear approximation using Gaussian kernels ⋮ Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory ⋮ Unregularized online algorithms with varying Gaussians ⋮ Distributed regularized least squares with flexible Gaussian kernels ⋮ Error Estimates for Multivariate Regression on Discretized Function Spaces ⋮ Least square regularized regression for multitask learning ⋮ On extension theorems and their connection to universal consistency in machine learning ⋮ Error bounds for learning the kernel ⋮ Unnamed Item ⋮ High order Parzen windows and randomized sampling ⋮ Online Classification with Varying Gaussians ⋮ Optimal learning with Gaussians and correntropy loss