A learning rule for very simple universal approximators consisting of a single layer of perceptrons
DOI: 10.1016/j.neunet.2007.12.036
zbMATH: 1254.68196
OpenAlex: W2043951469
Wikidata: Q34746027 (Scholia: Q34746027)
MaRDI QID: Q1932023
Peter Auer, Harald Burgsteiner, Wolfgang Maass
Publication date: 17 January 2013
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2007.12.036
Classification (MSC): Computational learning theory (68Q32); Learning and adaptive systems in artificial intelligence (68T05)
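The "learning rule" of the title is the rule this paper introduces for the parallel perceptron: a single layer of perceptrons whose ±1 outputs are summed and squashed into the network output. The sketch below is a minimal, illustrative NumPy rendering of that idea for binary classification with ±1 targets; the class name, the hyperparameters eta (learning rate), gamma (margin), and mu (margin-stabilization strength), and their default values are assumptions made for this example, not values taken from the paper.

```python
# Minimal sketch of a parallel perceptron with a p-delta-style update,
# assuming binary targets in {-1, +1}. Hyperparameter names and defaults
# are illustrative, not taken from the paper.
import numpy as np

class ParallelPerceptron:
    def __init__(self, dim, n_units=11, eta=0.01, gamma=0.1, mu=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_units, dim))   # one weight vector per perceptron
        self.W /= np.linalg.norm(self.W, axis=1, keepdims=True)
        self.eta, self.gamma, self.mu = eta, gamma, mu

    def predict(self, x):
        # Sum the individual +/-1 perceptron votes, squash to a +/-1 label.
        return 1.0 if np.sign(self.W @ x).sum() >= 0 else -1.0

    def update(self, x, target):
        acts = self.W @ x
        out = 1.0 if np.sign(acts).sum() >= 0 else -1.0
        for i, a in enumerate(acts):
            if out > target and a >= 0:
                self.W[i] -= self.eta * x            # too many +1 votes: push this unit down
            elif out < target and a < 0:
                self.W[i] += self.eta * x            # too many -1 votes: push this unit up
            elif 0 <= a < self.gamma:
                self.W[i] += self.eta * self.mu * x  # margin stabilization: clear the boundary
            elif -self.gamma < a < 0:
                self.W[i] -= self.eta * self.mu * x
        # Renormalize so the margin parameter gamma stays meaningful.
        self.W /= np.linalg.norm(self.W, axis=1, keepdims=True)

if __name__ == "__main__":
    # Toy usage: learn a linear threshold function on random data.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    y = np.sign(X @ np.array([1.0, -2.0, 0.5]))
    pp = ParallelPerceptron(dim=3)
    for _ in range(20):
        for x_i, y_i in zip(X, y):
            pp.update(x_i, y_i)
    acc = np.mean([pp.predict(x_i) == y_i for x_i, y_i in zip(X, y)])
    print(f"training accuracy: {acc:.2f}")
```

The design point of such a rule is that each perceptron is adjusted only from the network's scalar error and its own activation, with no per-unit error signal propagated back; the margin term keeps individual votes stable, and the normalization keeps the margin comparable across units.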
Related Items (6)
- Error-probability noise benefits in threshold neural signal detection
- On the generalization error of fixed combinations of classifiers
- A global neural network learning machine: coupled integer and fractional calculus operator with an adaptive learning scheme
- Direct kernel perceptron (DKP): ultra-fast kernel ELM-based classification with non-iterative closed-form weight calculation
- SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks
- Unnamed Item
Uses Software
Cites Work
- On the generalization error of fixed combinations of classifiers
- Large margin classification using the perceptron algorithm
- The Perceptron: A Model for Brain Functioning. I
- Fading memory and the problem of approximating nonlinear operators with Volterra series
- Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations
- On-line learning in the committee machine