A learning result for continuous-time recurrent neural networks
From MaRDI portal
Publication:1274412
DOI: 10.1016/S0167-6911(98)00006-1 · zbMath: 0909.93011 · OpenAlex: W1978418345 · MaRDI QID: Q1274412
Publication date: 12 January 1999
Published in: Systems & Control Letters
Full work available at URL: https://doi.org/10.1016/s0167-6911(98)00006-1
MSC classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Neural networks for/in biological studies, artificial life and related topics (92B20)
- System identification (93B30)
Related Items (1)
Cites Work
- Estimation of dependences based on empirical data. Transl. from the Russian by Samuel Kotz
- Turing computability with neural nets
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- Vapnik-Chervonenkis dimension of recurrent neural networks
- State observability in recurrent neural networks
- Complete controllability of continuous-time recurrent neural networks
- Bounding the Vapnik-Chervonenkis dimension of concept classes parameterized by real numbers
- Sample complexity for learning recurrent perceptron mappings