Elman Backpropagation as Reinforcement for Simple Recurrent Networks
Publication: 5441309
DOI: 10.1162/neco.2007.19.11.3108
zbMath: 1143.68539
OpenAlex: W2061043032
Wikidata: Q51905507
Scholia: Q51905507
MaRDI QID: Q5441309
Publication date: 11 February 2008
Published in: Neural Computation
Full work available at URL: http://epubs.surrey.ac.uk/22857/23/neco.2007.19.11.3108.pdf
Related Items (2)
- Supervised Learning in Multilayer Spiking Neural Networks
- Learning Spatiotemporally Encoded Pattern Transformations in Structured Spiking Neural Networks
Cites Work
- Dynamical recognizers: real-time language recognition by analog computers
- The calculi of emergence: Computation, dynamics and induction
- Simple Recurrent Networks Learn Context-Free and Context-Sensitive Languages by Counting
- Stack-like and queue-like dynamics in recurrent neural networks
- Synaptic noise as a means of implementing weight-perturbation learning
- Recurrent Neural Networks with Small Weights Implement Definite Memory Machines
- The Crystallizing Substochastic Sequential Machine Extractor: CrySSMEx
- Attention-Gated Reinforcement Learning of Internal Representations for Classification
- Predicting the future of discrete sequences from fractal representations of the past