Rethinking statistical learning theory: learning using statistical invariants
DOI: 10.1007/S10994-018-5742-0 · zbMath: 1480.62064 · OpenAlex: W2884692446 · Wikidata: Q129492950 · Scholia: Q129492950 · MaRDI QID: Q669285
Vladimir Vapnik, Rauf Izmailov
Publication date: 15 March 2019
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-018-5742-0
classification; weak convergence; neural network; learning theory; kernel function; conditional probability; ill-posed problem; regression; reproducing kernel Hilbert space; support vector machine; privileged information; intelligent teacher
Classification and discrimination; cluster analysis (statistical aspects) (62H30) · Computational learning theory (68Q32) · Nonparametric estimation (62G05) · Learning and adaptive systems in artificial intelligence (68T05) · Knowledge representation (68T30)
Related Items (2)
Uses Software
Cites Work
- Nonparametric methods for reconstructing probability densities
- Knowledge transfer in SVM and neural networks
- Theorie der Zeichenerkennung [Theory of Pattern Recognition]
- The unreasonable effectiveness of mathematics in the natural sciences. Richard Courant lecture in mathematical sciences delivered at New York University, May 11, 1959