Classification with non-i.i.d. sampling
From MaRDI portal
Publication:652859
DOI: 10.1016/j.mcm.2011.03.042 · zbMath: 1228.62074 · OpenAlex: W1972935639 · MaRDI QID: Q652859
Publication date: 18 December 2011
Published in: Mathematical and Computer Modelling
Full work available at URL: https://doi.org/10.1016/j.mcm.2011.03.042
Keywords: learning theory; reproducing kernel Hilbert spaces; \(\ell^{2}\)-empirical covering number; \(\beta\)-mixing sequence; capacity dependent error bounds; regularized classification
MSC classification: Classification and discrimination; cluster analysis (statistical aspects) (62H30) · Artificial intelligence (68T99) · Applications of functional analysis in probability theory and statistics (46N30)
Related Items (6)
- Error analysis of the moving least-squares method with non-identical sampling
- Convergence rate for the moving least-squares learning with dependent sampling
- ERM scheme for quantile regression
- Fast learning from \(\alpha\)-mixing observations
- Unnamed Item
- Error analysis of the moving least-squares regression learning algorithm with \(\beta\)-mixing and non-identical sampling
Cites Work
- Unnamed Item
- Unnamed Item
- Learning from non-identical sampling for classification
- Multi-kernel regularized classifiers
- Learning from dependent observations
- Basic properties of strong mixing conditions. A survey and some open questions
- Fast rates for support vector machines using Gaussian kernels
- Rates of convergence for empirical processes of stationary mixing sequences
- A Bennett concentration inequality and its application to suprema of empirical processes
- Learning and generalisation. With applications to neural networks
- The covering number in learning theory
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Weak convergence and empirical processes. With applications to statistics
- Least-square regularized regression with non-iid sampling
- Statistical performance of support vector machines
- Learning rates of regularized regression for exponentially strongly mixing sequence
- Learning theory estimates via integral operators and their approximations
- Learning Theory
- Capacity of reproducing kernel spaces in learning theory
- ONLINE LEARNING WITH MARKOV SAMPLING
- Minimum complexity regression estimation with weakly dependent observations
- Improving the sample complexity using global data
- Convexity, Classification, and Risk Bounds