Statistical performance of support vector machines
From MaRDI portal
Publication:2426613
DOI: 10.1214/009053607000000839
zbMath: 1133.62044
arXiv: 0804.0551
OpenAlex: W1983563203
MaRDI QID: Q2426613
Gilles Blanchard, Olivier Bousquet, Pascal Massart
Publication date: 23 April 2008
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0804.0551
- Asymptotic properties of nonparametric inference (62G20)
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Nonparametric estimation (62G05)
- Learning and adaptive systems in artificial intelligence (68T05)
- Neural nets and related approaches to inference from stochastic processes (62M45)
- Applications of operator theory in probability theory and statistics (47N30)
Related Items
- Some theoretical results regarding the polygonal distribution
- A penalized criterion for variable selection in classification
- Spatio-temporal convolution kernels
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- The new interpretation of support vector machines on statistical learning theory
- Regularization in kernel learning
- Optimal dyadic decision trees
- Classifiers of support vector machine type with \(\ell_1\) complexity regularization
- Fast rates for empirical vector quantization
- Multi-kernel regularized classifiers
- Inverse statistical learning
- Universally consistent vertex classification for latent positions graphs
- Sparsity in multiple kernel learning
- Fast convergence rates of deep neural networks for classification
- Robust machine learning by median-of-means: theory and practice
- A Reproducing Kernel Hilbert Space Framework for Functional Classification
- Consistency and convergence rate for nearest subspace classifier
- Consistency of support vector machines using additive kernels for additive models
- Oracle properties of SCAD-penalized support vector machine
- Estimating conditional quantiles with the help of the pinball loss
- Penalized empirical risk minimization over Besov spaces
- Learning Rates for Classification with Gaussian Kernels
- Classification with non-i.i.d. sampling
- Comment
- Simultaneous adaptation to the margin and to complexity in classification
- Learning rates for kernel-based expectile regression
- Cox process functional learning
- Optimal rates of aggregation in classification under low noise assumption
- Learning from dependent observations
- Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions
- Learning Rates of lq Coefficient Regularization Learning with Gaussian Kernel
- Concentration estimates for the moving least-square method in learning theory
- Fast learning rates for plug-in classifiers
- Oracle inequalities for support vector machines that are based on random entropy numbers
- Adaptive metric dimensionality reduction
- Bandwidth selection in kernel empirical risk minimization via the gradient
- Comment
- Measuring the Capacity of Sets of Functions in the Analysis of ERM
- Theory of Classification: a Survey of Some Recent Advances
- On Reject and Refine Options in Multicategory Classification
- Optimal weighted nearest neighbour classifiers
- Large-margin classification with multiple decision rules
- Fast generalization error bound of deep learning without scale invariance of activation functions
- Convergence rates of generalization errors for margin-based classification
- Asymptotic normality of support vector machine variants and other regularized kernel methods
- Nonasymptotic bounds for vector quantization in Hilbert spaces
- Estimating Individualized Treatment Rules Using Outcome Weighted Learning
Cites Work
- Risk bounds for statistical learning
- Fast rates for support vector machines using Gaussian kernels
- A Bennett concentration inequality and its application to suprema of empirical processes
- Empirical margin distributions and bounding the generalization error of combined classifiers
- About the constants in Talagrand's concentration inequalities for empirical processes
- Support vector machines are universally consistent
- Complexity regularization via localized random penalties
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Optimal aggregation of classifiers in statistical learning
- Regularization networks and support vector machines
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Simultaneous adaptation to the margin and to complexity in classification
- Empirical minimization
- Square root penalty: adaptation to the margin in classification and in edge estimation
- Local Rademacher complexities
- Classifiers of support vector machine type with \(\ell_1\) complexity regularization
- On the Eigenspectrum of the Gram Matrix and the Generalization Error of Kernel-PCA
- Capacity of reproducing kernel spaces in learning theory
- A new concentration result for regularized risk minimizers
- Scale-sensitive dimensions, uniform convergence, and learnability
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- Minimax nonparametric classification. I: Rates of convergence
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Learning Theory
- DOI: 10.1162/153244302760200704
- DOI: 10.1162/1532443041424337
- DOI: 10.1162/1532443041424319
- DOI: 10.1162/153244303321897690
- Piecewise-polynomial approximations of functions of the classes \(W_p^{\alpha}\)
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Convexity, Classification, and Risk Bounds
- Some applications of concentration inequalities to statistics