Learning in compressed space
From MaRDI portal
Publication:2510741
DOI: 10.1016/J.NEUNET.2013.01.020
zbMath: 1293.68221
DBLP: journals/nn/FabischKWK13
OpenAlex: W2027138707
Wikidata: Q45959503
Scholia: Q45959503
MaRDI QID: Q2510741
Alexander Fabisch, Yohannes Kassahun, Hendrik Wöhrle, Frank Kirchner
Publication date: 4 August 2014
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2013.01.020
Keywords: supervised learning, artificial neural networks, reinforcement learning, compressed sensing, model compression
Cites Work
- A simple proof of the restricted isometry property for random matrices
- Multilayer feedforward networks are universal approximators
- Extensions of Lipschitz mappings into a Hilbert space
- Learning Deep Architectures for AI
- Decoding by Linear Programming
- Neuroevolution strategies for episodic reinforcement learning
- An Algorithm for Least-Squares Estimation of Nonlinear Parameters
- 10.1162/153244303322753616
- Learning representations by back-propagating errors
- A method for the solution of certain non-linear problems in least squares
- Compressed sensing