Global exponential convergence and stability of gradient-based neural network for online matrix inversion (Q734897)
scientific article; zbMATH DE number 5614868
Publication date: 14 October 2009
The starting point for online matrix inversion is the equation \(AX - I = 0\) and the recurrent gradient-based neural network \(\dot{X}(t) = -\gamma\,(AX(t) - I)\) with a design parameter \(\gamma > 0\). To prove convergence and stability of this neural network, the authors use the Lyapunov function \(E(t) = \operatorname{tr}(\tilde{X}^T A^T A \tilde{X})/2\), where \(\tilde{X}(t) = X(t) - X^*\) and \(X^*\) is the exact inverse. For nonsingular \(A\) the neural network is shown to converge exponentially to \(X^*\) from any initial state \(X(0)\), while for singular \(A\) the method is still globally stable. Numerical experiments verify this convergence behavior; in the singular case, the computed solution \(X\) is found to equal the Moore-Penrose pseudoinverse of \(A\).
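The dynamics above can be sketched numerically. The following is a minimal illustration (not the authors' code): explicit Euler integration of \(\dot{X}(t) = -\gamma(AX(t) - I)\), with the step size, \(\gamma\), iteration count, and test matrix all chosen for demonstration. Note that for this particular flow the test matrix is taken with eigenvalues in the right half-plane, so the trajectory converges to the inverse.

```python
import numpy as np

def gnn_inverse(A, gamma=10.0, dt=1e-3, steps=20000):
    """Integrate dX/dt = -gamma * (A X - I) by explicit Euler steps.

    Illustrative sketch only: gamma, dt, and steps are hypothetical
    choices, and dt*gamma must be small enough for the discretization
    to remain stable.
    """
    n = A.shape[0]
    X = np.zeros((n, n))      # initial state X(0); any start is allowed
    I = np.eye(n)
    for _ in range(steps):
        X = X - dt * gamma * (A @ X - I)   # one Euler step of the ODE
    return X

# Example: a matrix with eigenvalues 5 and 2 (right half-plane)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
X = gnn_inverse(A)
print(np.allclose(X, np.linalg.inv(A), atol=1e-6))   # → True
```

The residual \(AX(t) - I\) shrinks geometrically here (each Euler step multiplies the error by \(I - \mathrm{d}t\,\gamma A\)), which mirrors the exponential convergence established in the paper for the continuous-time flow.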
Keywords: matrix inverse; online matrix inversion; neural network; Lyapunov function; stability; exponential convergence; numerical examples; Moore-Penrose pseudoinverse