Accelerating a recurrent neural network to finite-time convergence using a new design formula and its application to time-varying matrix square root
DOI: 10.1016/j.jfranklin.2017.06.012 · zbMath: 1395.93353 · OpenAlex: W2728677037 · MaRDI QID: Q1661241
Publication date: 16 August 2018
Published in: Journal of the Franklin Institute
Full work available at URL: https://doi.org/10.1016/j.jfranklin.2017.06.012
Keywords: finite-time convergence; design formula; recurrent neural network acceleration; time-varying matrix square root
MSC classifications: Learning and adaptive systems in artificial intelligence (68T05); Design techniques (robust design, computer-aided design, etc.) (93B51); Discrete-time control/observation systems (93C55)
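As background for the title and keywords, a minimal sketch of the standard zeroing (Zhang) dynamics formulation for the time-varying matrix square root, assuming the conventional error function E(t) and a matrix-valued activation map Φ(·) as used in the cited ZNN literature; per the title, the paper's contribution is a new design formula for the right-hand side that accelerates convergence from exponential to finite time, whose exact form is not reproduced here.

```latex
% Conventional ZNN setup for the time-varying square root X(t)^2 = A(t).
\begin{align}
  E(t) &= X(t)\,X(t) - A(t), \\
  \dot{E}(t) &= -\gamma\,\Phi\!\bigl(E(t)\bigr), \qquad \gamma > 0, \\
  \dot{X}(t)\,X(t) + X(t)\,\dot{X}(t) &= \dot{A}(t) - \gamma\,\Phi\!\bigl(E(t)\bigr).
\end{align}
```

The third line, obtained by differentiating E(t) and substituting into the design formula, is a Sylvester equation in the unknown Ẋ(t), which is what a numerical realization solves at each instant.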
Related Items (16)
Cites Work
- A note on square roots of nonnegative matrices
- Finite-time consensus of second-order multi-agent systems via auxiliary system approach
- A new design formula exploited for accelerating Zhang neural network and its application to time-varying matrix inversion
- On square roots and norms of matrices with symmetry properties
- On exponential convergence of nonlinear gradient dynamics system with application to square root finding
- Stable iterations for the matrix square root
- Time-varying square roots finding via Zhang dynamics versus gradient dynamics and the former's link and new explanation to Newton-Raphson iteration
- \(\mathcal H_\infty\) state estimation for memristive neural networks with time-varying delays: the discrete-time case
- Zeroing Dynamics, Gradient Dynamics, and Newton Iterations
- Uniqueness of matrix square roots and an application
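To make the dynamics sketched above concrete, here is a minimal numerical illustration, assuming a hypothetical 2×2 positive-definite target A(t), forward-Euler integration, and the sign-bi-power activation often used for finite-time ZNN convergence; none of these choices are taken from the paper itself, whose new design formula differs in detail.

```python
# Sketch: zeroing/Zhang neural network (ZNN) for X(t)^2 = A(t).
# Assumptions (not from the paper): example A(t), gamma, dt, activation.
import numpy as np
from scipy.linalg import solve_sylvester, sqrtm

def A(t):  # hypothetical time-varying SPD target matrix
    return np.array([[5.0 + np.sin(t), 1.0],
                     [1.0, 4.0 + np.cos(t)]])

def A_dot(t):  # its analytic time derivative
    return np.array([[np.cos(t), 0.0],
                     [0.0, -np.sin(t)]])

def phi(E, p=0.5):
    # Sign-bi-power activation (elementwise), a common finite-time
    # choice in the ZNN literature.
    return 0.5 * (np.abs(E)**p + np.abs(E)**(1.0 / p)) * np.sign(E)

gamma, dt, T = 10.0, 1e-3, 3.0
X = np.real(sqrtm(A(0.0))) + 0.3 * np.eye(2)  # perturbed initial state
for k in range(int(T / dt)):
    t = k * dt
    E = X @ X - A(t)                  # residual error E(t)
    rhs = A_dot(t) - gamma * phi(E)   # ZNN design formula, right-hand side
    Xdot = solve_sylvester(X, X, rhs) # solve Xdot·X + X·Xdot = rhs
    X = X + dt * Xdot                 # forward-Euler step
print("final residual:", np.linalg.norm(X @ X - A(T)))
```

The residual norm should decay rapidly toward zero, illustrating why the design-formula choice (here, sign-bi-power versus a plain linear activation) governs the convergence rate the paper studies.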