
New results on input-to-state convergence for recurrent neural networks with variable inputs

From MaRDI portal
Publication:1003222

DOI: 10.1016/J.NONRWA.2007.03.019
zbMath: 1154.34339
OpenAlex: W1992332809
MaRDI QID: Q1003222

Yunxia Guo

Publication date: 27 February 2009

Published in: Nonlinear Analysis. Real World Applications

Full work available at URL: https://doi.org/10.1016/j.nonrwa.2007.03.019


zbMATH Keywords

Lyapunov functions; recurrent neural networks; input-to-state convergence


Mathematics Subject Classification ID

  • Neural networks for/in biological studies, artificial life and related topics (92B20)
  • Global stability of solutions to ordinary differential equations (34D23)
  • Asymptotic properties of solutions to ordinary differential equations (34D05)


Related Items (1)

Stability analysis of linear time-varying time-delay systems by non-quadratic Lyapunov functions with indefinite derivatives




Cites Work

  • Delay structure conditions for identifiability of closed loop systems
  • Analysis and design of a recurrent neural network for linear programming
  • New characterizations of input-to-state stability
  • Smooth stabilization implies coprime factorization
  • Unnamed Item
  • Unnamed Item




This page was built for publication: New results on input-to-state convergence for recurrent neural networks with variable inputs

Retrieved from "https://portal.mardi4nfdi.de/w/index.php?title=Publication:1003222&oldid=12990669"
This page was last edited on 30 January 2024, at 20:40.