scientific article
From MaRDI portal
Publication: 2880932
zbMath: 1235.68207
MaRDI QID: Q2880932
Dao-Hong Xiang, Ding-Xuan Zhou
Publication date: 17 April 2012
Full work available at URL: http://www.jmlr.org/papers/v10/xiang09a.html
Title: Classification with Gaussians and convex loss
Keywords: approximation; covering number; reproducing kernel Hilbert space; binary classification; general convex loss; varying Gaussian kernels
MSC: Classification and discrimination; cluster analysis (statistical aspects) (62H30); Learning and adaptive systems in artificial intelligence (68T05)
Related Items (23)
Learning with sample dependent hypothesis spaces
Learning rates of kernel-based robust classification
An oracle inequality for regularized risk minimizers with strongly mixing observations
Optimal regression rates for SVMs using Gaussian kernels
A study on the error of distributed algorithms for big data classification with SVM
Conditional quantiles with varying Gaussians
Employing different loss functions for the classification of images via supervised learning
Learning with convex loss and indefinite kernels
Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
A note on support vector machines with polynomial kernels
Learning rates for classification with Gaussian kernels
A new comparison theorem on conditional quantiles
Calibration of \(\epsilon\)-insensitive loss in support vector machines regression
Logistic classification with varying Gaussians
Learning from non-identical sampling for classification
Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions
Covering numbers of Gaussian reproducing kernel Hilbert spaces
Unregularized online algorithms with varying Gaussians
Distributed regularized least squares with flexible Gaussian kernels
Learning rates for the risk of kernel-based quantile regression estimators in additive models
Learning rates of regression with q-norm loss and threshold
Optimal learning with Gaussians and correntropy loss
Comparison theorems on large-margin learning