Peak-to-peak exponential direct learning of continuous-time recurrent neural network models: a matrix inequality approach
From MaRDI portal
Publication: 395779
DOI: 10.1186/1029-242X-2013-68
zbMATH: 1358.68229
Wikidata: Q59293933 (Scholia: Q59293933)
MaRDI QID: Q395779
Publication date: 30 January 2014
Published in: Journal of Inequalities and Applications
Keywords: disturbance; matrix inequality; dynamic neural network models; exponential peak-to-peak norm performance; training law
MSC classifications: Learning and adaptive systems in artificial intelligence (68T05); Stability theory of functional-differential equations (34K20)
Cites Work
- Some new results on stability of Takagi-Sugeno fuzzy Hopfield neural networks
- Exponential \(\mathcal H_{\infty}\) stable learning method for Takagi-Sugeno fuzzy delayed neural networks: a convex optimization approach
- Passive learning and input-to-state stability of switched Hopfield neural networks with time-delay
- Identification of nonlinear dynamical systems using multilayered neural networks
- New necessary and sufficient conditions for absolute stability of neural networks
- An \(\mathcal H_{\infty}\) approach to stability analysis of switched Hopfield neural networks with time-delay
- Neural networks for control systems - a survey
- \(\mathcal L_{2}-\mathcal L_{\infty }\) nonlinear system identification via recurrent neural networks
- Robust stability of recurrent neural networks with ISS learning algorithm
- Multiobjective output-feedback control via LMI optimization
- A simple proof of a necessary and sufficient condition for absolute stability of symmetric neural networks
- Input-to-state stability (ISS) analysis for dynamic neural networks
- Lur'e systems with multilayer perceptron and recurrent neural networks: absolute stability and dissipativity
- Some stability properties of dynamic neural networks
- Adaptive control of unknown plants using dynamical neural networks
- A linear matrix inequality approach to peak-to-peak gain minimization