Convergence rates of learning algorithms by random projection
DOI: 10.1016/j.acha.2013.09.003
zbMath: 1346.68155
OpenAlex: W2073196450
MaRDI QID: Q2252501
Publication date: 18 July 2014
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://doi.org/10.1016/j.acha.2013.09.003
Mathematics Subject Classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
Cites Work
- Model selection for regularized least-squares algorithm in learning theory
- A simple proof of the restricted isometry property for random matrices
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Regularization networks and support vector machines
- Learning rates of least-square regularized regression
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Extensions of Lipschitz mappings into a Hilbert space
- Learning Theory
- Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
- On the Performance of Clustering in Hilbert Spaces
- Probability with Martingales
- An elementary proof of a theorem of Johnson and Lindenstrauss
- doi:10.1162/153244303321897690
- Convexity, Classification, and Risk Bounds
- Theory of Reproducing Kernels
- Compressed sensing